Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods
From MaRDI portal
Publication:2445750
DOI: 10.1016/j.csda.2010.03.004 · zbMath: 1284.62147 · OpenAlex: W2054440265 · MaRDI QID: Q2445750
Simone Borra, Agostino Di Ciaccio
Publication date: 14 April 2014
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://doi.org/10.1016/j.csda.2010.03.004
Keywords: bootstrap; neural networks; prediction error; cross-validation; leave-one-out; regression trees; projection pursuit regression; optimism; covariance penalty; extra-sample error; in-sample error
Related Items (9)
- Are financial ratios relevant for trading credit risk? Evidence from the CDS market
- On the usefulness of cross-validation for directional forecast evaluation
- On the performance of the flexible maximum entropy distributions within partially adaptive estimation
- Special issue on variable selection and robust procedures
- Mean-variance-skewness-entropy measures: a multi-objective approach for portfolio selection
- A Comparison of Robust Model Choice Criteria Within a Metalearning Study
- Markov cross-validation for time series model evaluations
- A note on the validity of cross-validation for evaluating autoregressive time series prediction
- Representative random sampling: an empirical evaluation of a novel bin stratification method for model performance estimation
Uses Software
Cites Work
- Asymptotics of cross-validated risk estimation in estimator selection and performance assessment
- An introduction to copulas.
- Resampling methods for variable selection in robust regression
- Estimating classification error rate: repeated cross-validation, repeated hold-out and bootstrap
- Estimation of the conditional risk in classification: the swapping method
- Estimation of the mean of a multivariate normal distribution
- Multivariate adaptive regression splines
- Estimating the dimension of a model
- Heuristics of instability and stabilization in model selection
- Model selection via multifold cross validation
- Bootstrap Model Selection
- Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation
- How Biased is the Apparent Error Rate of a Prediction Rule?
- A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods
- On Measuring and Correcting the Effects of Data Mining and Model Selection
- The Little Bootstrap and Other Methods for Dimensionality Selection in Regression: X-Fixed Prediction Error
- Asymptotics for and against cross-validation
- Improvements on Cross-Validation: The .632+ Bootstrap Method
- Adaptive Model Selection
- A Comparison of Nonparametric Error Rate Estimation Methods in Classification Problems
- Linear Model Selection by Cross-Validation
- Prediction Error Estimation Under Bregman Divergence for Non‐Parametric Regression and Classification
- Some Comments on Cp
- The Estimation of Prediction Error
- The elements of statistical learning. Data mining, inference, and prediction