Improvements on Cross-Validation: The .632+ Bootstrap Method
Publication:4366231
DOI: 10.2307/2965703 · zbMath: 0887.62044 · OpenAlex: W4250236131 · Wikidata: Q56019665 · Scholia: Q56019665 · MaRDI QID: Q4366231
Authors: Efron, Bradley; Tibshirani, Robert
Publication date: 7 January 1998
Published in: Journal of the American Statistical Association
Full work available at URL: https://doi.org/10.2307/2965703
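The paper proposes the .632+ bootstrap estimator of prediction error, which blends the apparent (resubstitution) error with the leave-one-out bootstrap error through a weight driven by an estimated relative overfitting rate. A minimal Python sketch of that estimate for a classifier follows; it assumes a scikit-learn-style model with fit/predict methods, and the helper name err632plus, the number of bootstrap replications B, and the toy data are illustrative assumptions, not part of the paper or any library.

import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier


def err632plus(model, X, y, B=200, seed=None):
    """Hypothetical helper: .632+ bootstrap estimate of misclassification error."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    n = len(y)

    # Apparent (resubstitution) error from a fit on the full sample.
    full_fit = clone(model).fit(X, y)
    err_bar = np.mean(full_fit.predict(X) != y)

    # Leave-one-out bootstrap error: each point is scored only by
    # bootstrap fits whose resample did not contain that point.
    loss_sum = np.zeros(n)
    loss_cnt = np.zeros(n)
    for _ in range(B):
        idx = rng.integers(0, n, size=n)
        out = np.setdiff1d(np.arange(n), idx)
        if out.size == 0:
            continue
        fit = clone(model).fit(X[idx], y[idx])
        loss_sum[out] += fit.predict(X[out]) != y[out]
        loss_cnt[out] += 1
    covered = loss_cnt > 0
    err1 = np.mean(loss_sum[covered] / loss_cnt[covered])

    # No-information error rate gamma and relative overfitting rate R.
    classes = np.unique(y)
    p_hat = np.array([np.mean(y == c) for c in classes])
    q_hat = np.array([np.mean(full_fit.predict(X) == c) for c in classes])
    gamma = np.sum(p_hat * (1.0 - q_hat))
    err1_c = min(err1, gamma)
    if gamma > err_bar and err1_c > err_bar:
        R = (err1_c - err_bar) / (gamma - err_bar)
    else:
        R = 0.0

    # .632+ weight blends the apparent error and the leave-one-out bootstrap error.
    w = 0.632 / (1.0 - 0.368 * R)
    return (1.0 - w) * err_bar + w * err1_c


# Toy usage on synthetic data with a shallow decision tree.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(err632plus(DecisionTreeClassifier(max_depth=3), X, y, B=100, seed=1))

When the relative overfitting rate R is zero the weight w equals 0.632 and the estimate reduces to the plain .632 rule; as R approaches one, w approaches one and the estimate approaches the leave-one-out bootstrap error alone.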
Related Items (84)
A new variable selection approach using random forests ⋮ A Statistical Framework for Hypothesis Testing in Real Data Comparison Studies ⋮ Least angle regression. (With discussion) ⋮ TREE-BASED REGRESSION FOR A CIRCULAR RESPONSE ⋮ Evaluation of new service development strategies using multicriteria analysis: predicting the success of innovative hospitality services ⋮ The comparison study of the model selection criteria on the Tobit regression model based on the bootstrap sample augmentation mechanisms ⋮ An alternative objective function for fitting regression trees to functional response variables ⋮ Estimation of varying coefficient models with measurement error ⋮ A proportional-hazards model for survival analysis and long-term survivors modeling: application to amyotrophic lateral sclerosis data ⋮ Resampling-based information criteria for best-subset regression ⋮ A center sliding Bayesian binary classifier adopting orthogonal polynomials ⋮ Predicting human behavior in unrepeated, simultaneous-move games ⋮ Spatial bootstrapped microeconometrics: Forecasting for out‐of‐sample geo‐locations in big data ⋮ A Clustered Gaussian Process Model for Computer Experiments ⋮ Three distributions in the extended occupancy problem ⋮ Estimation of the Spatial Weighting Matrix for Spatiotemporal Data under the Presence of Structural Breaks ⋮ Prediction of sports injuries in football: a recurrent time-to-event approach using regularized Cox models ⋮ Classifier variability: accounting for training and testing ⋮ Forecast of the higher heating value in biomass torrefaction by means of machine learning techniques ⋮ Double-bagging: Combining classifiers by bootstrap aggregation ⋮ Confidence intervals for the Cox model test error from cross‐validation ⋮ The fraud loss for selecting the model complexity in fraud detection ⋮ Assessing the variability of posterior probabilities in Gaussian model-based clustering ⋮ Improved feature selection with simulation optimization ⋮ The Lasso with general Gaussian designs with applications to hypothesis testing ⋮ Searching for the optimum value of the smoothing parameter for a radial basis function surface with feature area by using the bootstrap method ⋮ Bootstrap-based model selection criteria for beta regressions ⋮ Block-regularized repeated learning-testing for estimating generalization error ⋮ Robust Data-Driven Fault Detection in Dynamic Process Environments Using Discrete Event Systems ⋮ Model selection by resampling penalization ⋮ Applying randomness effectively based on random forests for classification task of datasets of insufficient information ⋮ Bayesian classification for bivariate normal gene expression ⋮ Measuring the prediction error. 
A comparison of cross-validation, bootstrap and covariance penalty methods ⋮ Estimation and status prediction in a discrete mover‐stayer model with covariate effects on stayer's probability ⋮ An overview of techniques for linking high‐dimensional molecular data to time‐to‐event endpoints by risk prediction models ⋮ Confidence scores for prediction models ⋮ The NPAIRS Computational Statistics Framework for Data Analysis in Neuroimaging ⋮ Bagging Tree Classifiers for Glaucoma Diagnosis ⋮ Multiple predicting K-fold cross-validation for model selection ⋮ From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation: Rejoinder ⋮ A method for constructing a confidence bound for the actual error rate of a prediction rule in high dimensions ⋮ Robust Prediction of t-Year Survival with Data from Multiple Studies ⋮ A Robust Alternative to the Schemper-Henderson Estimator of Prediction Error ⋮ Optimal Combinations of Diagnostic Tests Based on AUC ⋮ A kernel PLS based classification method with missing data handling ⋮ A comparison of parametric conditional error-rate estimators for the two-group linear discriminant function ⋮ Bundling classifiers by bagging trees ⋮ Modeling of the algal atypical increase in La Barca reservoir using the DE optimized least square support vector machine approach with feature selection ⋮ Bootstrap estimated true and false positive rates and ROC curve ⋮ Multiclass classification and gene selection with a stochastic algorithm ⋮ Estimating classification error rate: repeated cross-validation, repeated hold-out and bootstrap ⋮ The benefit of data-based model complexity selection via prediction error curves in time-to-event data ⋮ An empirical study of PLAD regression using the bootstrap ⋮ A survey of cross-validation procedures for model selection ⋮ An improved methodology for filling missing values in spatiotemporal climate data set. Application to Tanganyika Lake data set ⋮ Selection bias in working with the top genes in supervised classification of tissue samples ⋮ Two-group classification via a biobjective margin maximization model ⋮ Stochastic optimization with adaptive restart: a framework for integrated local and global learning ⋮ An extended two-stage sequential optimization approach: properties and performance ⋮ Bandwidth choice for nonparametric classification ⋮ Bayesian nonparametric model selection and model testing ⋮ Generalized additive multi-mixture model for data mining.
⋮ Efron‐Type Measures of Prediction Error for Survival Analysis ⋮ The MELBS team winning entry for the EVA2017 competition for spatiotemporal prediction of extreme rainfall using generalized extreme value quantiles ⋮ Recent developments in bootstrap methodology ⋮ New Bootstrap Applications in Supervised Learning ⋮ Embedding sample points uncertainty measures in learning algorithms ⋮ Assessing classifiers in terms of the partial area under the ROC curve ⋮ Exact bootstrap k-nearest neighbor learners ⋮ Model Selection in Estimating Equations ⋮ Evaluating incremental values from new predictors with net reclassification improvement in survival analysis ⋮ Sample size determination for training cancer classifiers from microarray and RNA-seq data ⋮ Ensemble component selection for improving ICA based microarray data prediction models ⋮ Discrimination of psychotropic drugs over‐consumers using a threshold exceedance based approach ⋮ Meta‐learning approach to gene expression data classification ⋮ Consistent validation of gray-level thresholding image segmentation algorithms based on machine learning classifiers ⋮ Evaluating Incremental Values from New Predictors with Net Reclassification Improvement in Survival Analysis ⋮ On the predictive risk in misspecified quantile regression ⋮ OR Practice–Data Analytics for Optimal Detection of Metastatic Prostate Cancer ⋮ Estimating prediction error in microarray classification: Modifications of the 0.632+ bootstrap when n < p ⋮ Modified check loss for efficient estimation via model selection in quantile regression ⋮ Technical Efficiency and Spatial Econometric Model: Application to Rice Production of Thailand ⋮ Probability estimation and machine learning-Editorial ⋮ Machine learning versus statistical modeling