Model selection by resampling penalization
Publication:1951992
DOI: 10.1214/08-EJS196 · zbMath: 1326.62097 · arXiv: 0906.3124 · OpenAlex: W2046044912 · MaRDI QID: Q1951992
Publication date: 27 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/0906.3124
Keywords: penalization; model selection; resampling; adaptivity; nonparametric regression; nonparametric statistics; heteroscedastic data; regressogram; exchangeable weighted bootstrap; histogram selection
Mathematics Subject Classification: Nonparametric regression and quantile regression (62G08); Nonparametric statistical resampling methods (62G09)
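The keywords above point to the paper's core procedure: selecting among regressogram (histogram) models in heteroscedastic regression with a penalty computed from exchangeable weighted bootstrap resamples. The following is only a minimal illustrative sketch of that general idea, assuming Efron-type multinomial bootstrap weights, a unit calibration constant, regular partitions of [0, 1], and a naive Monte Carlo loop; it is not the paper's exact procedure, constants, or closed-form penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heteroscedastic regression data Y = s(X) + sigma(X) * eps, the setting
# named in the keywords (the data-generating choices here are arbitrary).
n = 500
X = rng.uniform(0.0, 1.0, n)
Y = np.sin(2 * np.pi * X) + (0.2 + 0.8 * X) * rng.normal(size=n)


def fit_regressogram(X, Y, edges, w=None):
    """(Weighted) least-squares regressogram on the partition `edges`:
    the fitted value on each bin is the (weighted) mean of Y in that bin."""
    if w is None:
        w = np.ones_like(Y)
    bins = np.clip(np.digitize(X, edges) - 1, 0, len(edges) - 2)
    fitted = np.zeros_like(Y)
    for b in range(len(edges) - 1):
        mask = bins == b
        if mask.any():
            if w[mask].sum() > 0:
                fitted[mask] = np.average(Y[mask], weights=w[mask])
            else:
                fitted[mask] = Y[mask].mean()  # bin got zero total weight
    return fitted


def resampling_penalty(X, Y, edges, n_resamples=50, C=1.0):
    """Monte Carlo estimate of a resampling penalty of the form
    C * E_W[ P_n gamma(shat_m^W) - P_n^W gamma(shat_m^W) ],
    using Efron-type multinomial bootstrap weights. C = 1 and the number
    of resamples are illustrative choices, not the paper's calibration."""
    m = len(Y)
    gaps = []
    for _ in range(n_resamples):
        # Mean-one exchangeable weights (Efron bootstrap counts).
        W = rng.multinomial(m, np.full(m, 1.0 / m)).astype(float)
        fitted_w = fit_regressogram(X, Y, edges, w=W)
        sq_err = (Y - fitted_w) ** 2
        # Unweighted empirical risk minus weighted empirical risk of the
        # weighted fit; this gap mimics the optimism of the estimator.
        gaps.append(sq_err.mean() - np.average(sq_err, weights=W))
    return C * float(np.mean(gaps))


# Select the number of bins of a regular partition of [0, 1] by minimizing
# empirical risk plus the resampling penalty.
scores = {}
for D in range(1, 31):
    edges = np.linspace(0.0, 1.0, D + 1)
    fitted = fit_regressogram(X, Y, edges)
    scores[D] = np.mean((Y - fitted) ** 2) + resampling_penalty(X, Y, edges)

D_hat = min(scores, key=scores.get)
print("selected number of bins:", D_hat)
```

Other exchangeable mean-one weight schemes (e.g. subsampling or Rademacher-type weights) could be swapped into `resampling_penalty` without changing the rest of the sketch.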
Related Items
- Some nonasymptotic results on resampling in high dimension. I: Confidence regions
- Adaptive estimation for an inverse regression model with unknown operator
- Model selection in linear regression using paired bootstrap
- Adaptive partitioning schemes for bipartite ranking
- Model selection: from theory to practice
- Segmentation of the mean of heteroscedastic data via cross-validation
- Spatial adaptation in heteroscedastic regression: propagation approach
- Optimal model selection in heteroscedastic regression using piecewise polynomial functions
- Model selection by resampling penalization
- Optimal model selection in density estimation
- Adaptive density estimation of stationary \(\beta\)-mixing and \(\tau\)-mixing processes
- Estimator selection with respect to Hellinger-type risks
- Margin-adaptive model selection in statistical learning
- Second order correctness of perturbation bootstrap M-estimator of multiple linear regression parameter
- Optimal model selection for density estimation of stationary data under various mixing conditions
- Cytometry inference through adaptive atomic deconvolution
- A survey of cross-validation procedures for model selection
- Estimator selection in the Gaussian setting
- High-dimensional Gaussian model selection on a Gaussian design
- Slope heuristics: overview and implementation
- Theoretical analysis of cross-validation for estimating the risk of the k-Nearest Neighbor classifier
- Estimation and model selection for model-based clustering with the conditional classification likelihood
Cites Work
- Minimax theory of image reconstruction
- Some nonasymptotic results on resampling in high dimension. I: Confidence regions
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Asymptotic optimality for \(C_p\), \(C_L\), cross-validation and generalized cross-validation: Discrete index set
- Optimal rates of convergence for nonparametric estimators
- When does bootstrap work? Asymptotic results and simulations
- A rank statistics approach to the consistency of a general bootstrap
- Bootstrap methods: another look at the jackknife
- Risk bounds for model selection via penalization
- Bootstrapping log likelihood and EIC, an extension of AIC
- Weak convergence of dependent empirical measures with application to subsampling in function spaces
- Exchangeably weighted bootstraps of the general empirical process
- Consistency of the generalized bootstrap for degenerate \(U\)-statistics
- The weighted bootstrap
- Approximating the negative moments of the Poisson distribution.
- Information-theoretic determination of minimax rates of convergence
- Smooth discrimination analysis
- Model selection for regression on a fixed design
- Estimation of equifrequency histograms
- Jackknife, bootstrap and other resampling methods in regression analysis
- Complexity regularization via localized random penalties
- On general resampling algorithms and their performance in distribution estimation
- Weak convergence and empirical processes. With applications to statistics
- Simultaneous estimation of the mean and the variance in heteroscedastic Gaussian regression
- Model selection by resampling penalization
- Minimal penalties for Gaussian model selection
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Consistency of cross validation for comparing regression procedures
- Adaptive asymptotically efficient estimation in heteroscedastic nonparametric regression
- Statistical predictor identification
- Concentration of the hypergeometric distribution
- Local Rademacher complexities
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Bootstrap Model Selection
- Model selection for regression on a random design
- Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation
- How Biased is the Apparent Error Rate of a Prediction Rule?
- A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods
- An optimal selection of regression variables
- The Predictive Sample Reuse Method with Applications
- Bounds on Negative Moments
- Improvements on Cross-Validation: The .632+ Bootstrap Method
- Testing the Fit of a Parametric Function
- Rademacher penalties and structural risk minimization
- Learning Theory
- Learning Theory
- The Relationship between Variable Selection and Data Agumentation and a Method for Prediction
- Some Comments on \(C_P\)
- Histogram selection in non Gaussian regression
- The bootstrap and Edgeworth expansion
- Combinatorial methods in density estimation
- Gaussian model selection
- Model selection and error estimation