How Biased is the Apparent Error Rate of a Prediction Rule?
DOI: 10.2307/2289236 · zbMATH Open: 0621.62073 · OpenAlex: W4249991467 · MaRDI QID: Q3757198 · FDO: Q3757198
Authors: Bradley Efron
Publication date: 1986
Full work available at URL: https://doi.org/10.2307/2289236
Recommendations
- Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation
- On the estimation of prediction errors in logistic regression models
- Overestimation of the receiver operating characteristic curve for logistic regression
- On the biases of error estimators in prediction problems
- Assessing the performance of an allocation rule
Keywords: bootstrap; generalized linear model; AIC; cross-validation; example; logistic model; prediction model; optimism; numerical results; underestimation; prediction errors; downward bias; Mallows' \(C_p\); Akaike criterion; error rate of prediction rule; general exponential family linear models; measures of prediction errors
MSC classification: Point estimation (62F10) · Linear regression; mixed models (62J05) · Linear inference, regression (62J99) · Parametric inference (62F99)
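The keywords above (optimism, downward bias, apparent error rate) describe the paper's theme: the error rate a prediction rule achieves on its own training data systematically underestimates its error on new data. A minimal simulation can illustrate this; the sketch below uses a simple nearest-mean classifier on two Gaussian classes (a generic example, not the exponential-family setup analysed in the paper), comparing the apparent (training-sample) error with the error on a large fresh sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_once(n=20, d=5, delta=0.5):
    """One replication: fit a nearest-mean rule, return (apparent, true) error."""
    # Training data: two Gaussian classes with means 0 and delta in each coordinate.
    X0 = rng.normal(0.0, 1.0, size=(n, d))
    X1 = rng.normal(delta, 1.0, size=(n, d))
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

    def err(A0, A1):
        # Misclassification rate of the rule "assign to the nearer class mean".
        e0 = np.mean(np.linalg.norm(A0 - m0, axis=1) > np.linalg.norm(A0 - m1, axis=1))
        e1 = np.mean(np.linalg.norm(A1 - m1, axis=1) > np.linalg.norm(A1 - m0, axis=1))
        return 0.5 * (e0 + e1)

    apparent = err(X0, X1)                      # error on the training data itself
    T0 = rng.normal(0.0, 1.0, size=(5000, d))   # large independent test sample
    T1 = rng.normal(delta, 1.0, size=(5000, d))
    true = err(T0, T1)
    return apparent, true

results = np.array([simulate_once() for _ in range(200)])
optimism = results[:, 1].mean() - results[:, 0].mean()
print(f"mean apparent error: {results[:, 0].mean():.3f}")
print(f"mean true error:     {results[:, 1].mean():.3f}")
print(f"estimated optimism:  {optimism:.3f}")
```

The average gap between true and apparent error is the "optimism" of the rule; estimating this quantity (by bootstrap or covariance-penalty arguments) is the subject of the publication indexed here.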
Cited In (first 100 items shown)
- Least angle regression. (With discussion)
- A study on tuning parameter selection for the high-dimensional lasso
- Multiple group linear discriminant analysis: robustness and error rate
- A Pliable Lasso
- Sparse estimation via nonconcave penalized likelihood in factor analysis model
- Discussion: ``A significance test for the lasso
- Discussion: ``A significance test for the lasso
- Discussion: ``A significance test for the lasso
- Discussion: ``A significance test for the lasso
- New aspects of Bregman divergence in regression and classification with parametric and nonparametric estimation
- Distance-based linear discriminant analysis for interval-valued data
- The asymptotic distribution of the proportion of correct classifications for a holdout sample in logistic regression
- Local behavior of sparse analysis regularization: applications to risk estimation
- Smoothing spline ANOVA models for large data sets with Bernoulli observations and the randomized GACV.
- Using specially designed exponential families for density estimation
- Comparing and selecting spatial predictors using local criteria
- A significance test for the lasso
- Estimation of the conditional risk in classification: the swapping method
- Maximizing proportions of correct classifications in binary logistic regression
- Bootstrap variants of the Akaike information criterion for mixed model selection
- On the biases of error estimators in prediction problems
- Model selection by resampling penalization
- Modeling strategies in longitudinal data analysis: covariate, variance function and correlation structure selection
- Cross validation model selection criteria for linear regression based on the Kullback-Leibler discrepancy
- Adapting to unknown sparsity by controlling the false discovery rate
- Discussion: ``A significance test for the lasso
- Is \(C_{p}\) an empirical Bayes method for smoothing parameter choice?
- Estimating the Kullback–Leibler risk based on multifold cross‐validation
- Low complexity regularization of linear inverse problems
- On model selection via stochastic complexity in robust linear regression
- Asymptotic bootstrap corrections of AIC for linear regression models
- Modelling of insurers' rating determinants. An application of machine learning techniques and statistical models
- Additive models with trend filtering
- Estimating the accuracy of (local) cross-validation via randomised GCV choices in kernel or smoothing spline regression
- Ideal point discriminant analysis
- Tuning parameter selection in sparse regression modeling
- Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods
- Data-based interval estimation of classification error rates
- Selection criteria for scatterplot smoothers
- SURE-tuned tapering estimation of large covariance matrices
- On the association between a random parameter and an observable
- A multistage algorithm for best-subset model selection based on the Kullback-Leibler discrepancy
- Model evaluation, discrepancy function estimation, and social choice theory
- An assumption for the development of bootstrap variants of the Akaike information criterion in mixed models
- A note on the generalized degrees of freedom under the \(L_{1}\) loss function
- Statistical properties of convex clustering
- Appropriate penalties in the final prediction error criterion: A decision theoretic approach
- A variable-selection criterion for principal component analysis based on particular Gaussian graphical models
- Efficient regularized isotonic regression with application to gene-gene interaction search
- Model selection for factorial Gaussian graphical models with an application to dynamic regulatory networks
- Nearly unbiased variable selection under minimax concave penalty
- A lasso for hierarchical interactions
- Quantifying the Predictive Performance of Prognostic Models for Censored Survival Data with Time-Dependent Covariates
- Rejoinder: ``A significance test for the lasso
- The negative correlations between data-determined bandwidths and the optimal bandwidth
- A large-sample model selection criterion based on Kullback's symmetric divergence
- A regression model selection criterion based on bootstrap bumping for use with resistant fitting.
- Degrees of freedom in lasso problems
- Discussion: ``A significance test for the lasso
- High-Dimensional Spatial Quantile Function-on-Scalar Regression
- A survey of cross-validation procedures for model selection
- Bayesian nonparametric model selection and model testing
- Reluctant generalized additive modeling
- Flexible and Interpretable Models for Survival Data
- Prediction Error Estimation Under Bregman Divergence for Non‐Parametric Regression and Classification
- Variable selection for generalized linear mixed models by \(L_1\)-penalized estimation
- A model search procedure for hierarchical models
- Bayesian comparison of latent variable models: conditional versus marginal likelihoods
- Extreme value correction: a method for correcting optimistic estimations in rule learning
- Bootstrap-based model selection criteria for beta regressions
- Recent developments in bootstrap methodology
- Efficient Computation and Model Selection for the Support Vector Regression
- Degrees of freedom in low rank matrix estimation
- Bootstrap estimation and model selection for multivariate normal mixtures using parallel computing with graphics processing units
- Assessing the performance of data assimilation algorithms which employ linear error feedback
- On the optimism correction of the area under the receiver operating characteristic curve in logistic prediction models
- Cross-Validation: What Does It Estimate and How Well Does It Do It?
- On the estimation of prediction errors in logistic regression models
- A non-convex regularization approach for stable estimation of loss development factors
- Are ordinal models useful for classification? a revised analysis
- Fused Lasso nearly-isotonic signal approximation in general dimensions
- P-splines with an \(\ell_1\) penalty for repeated measures
- Determination of different types of fixed effects in three-dimensional panels*
- Reconceptualizing the p-value from a likelihood ratio test: a probabilistic pairwise comparison of models based on Kullback-Leibler discrepancy measures
- Inference after variable selection using restricted permutation methods
- The degrees of freedom of partly smooth regularizers
- Optimal Simulator Selection
- Evaluation of generalized degrees of freedom for sparse estimation by replica method
- Asymptotic properties of a double penalized maximum likelihood estimator in logistic regres\-sion
- Evaluating the impact of exploratory procedures in regression prediction: A pseudosample approach
- Discussion of “From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation”
- Variable selection in canonical discriminant analysis for family studies
- Degrees of freedom for off-the-grid sparse estimation
- Criterion constrained Bayesian hierarchical models
- Cross-Validation for Correlated Data
- Regular, median and Huber cross‐validation: A computational comparison
- Extending AIC to best subset regression
- Model selection criteria based on cross-validatory concordance statistics
- Determination of the Selection Statistics and Best Significance Level in Backward Stepwise Logistic Regression
- Robust estimation in regression and classification methods for large dimensional data