Aggregated hold out for sparse linear regression with a robust loss function
From MaRDI portal
Abstract: Sparse linear regression methods generally have a free hyperparameter which controls the amount of sparsity and is subject to a bias-variance tradeoff. This article considers the use of Aggregated hold-out to aggregate over values of this hyperparameter, in the context of linear regression with the Huber loss function. Aggregated hold-out (Agghoo) is a procedure which averages estimators selected by hold-out (cross-validation with a single split). In the theoretical part of the article, it is proved that Agghoo satisfies a non-asymptotic oracle inequality when it is applied to sparse estimators which are parametrized by their zero-norm. In particular, this includes a variant of the Lasso introduced by Zou, Hastie and Tibshirani. Simulations are used to compare Agghoo with cross-validation. They show that Agghoo performs better than CV when the intrinsic dimension is high and when there are confounders correlated with the predictive covariates.
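The Agghoo procedure described in the abstract can be sketched in a few lines: draw several random train/validation splits, run hold-out selection on each split (here over a family of sparse estimators parametrized by their zero-norm, scored with the Huber loss), then average the selected estimators. The sketch below is illustrative only, not the paper's implementation; the marginal-screening family, the Huber parameter `delta`, and the function names are assumptions made for a self-contained example.

```python
import numpy as np

def huber_loss(residuals, delta=1.35):
    # Huber loss: quadratic near zero, linear in the tails (robust to outliers).
    a = np.abs(residuals)
    return np.where(a <= delta, 0.5 * residuals**2,
                    delta * (a - 0.5 * delta)).mean()

def holdout_estimator(X, y, train, val, k_max, delta=1.35):
    # Illustrative family of sparse estimators parametrized by zero-norm k:
    # keep the k features most correlated with y, then fit least squares.
    Xt, yt = X[train], y[train]
    order = np.argsort(-np.abs(Xt.T @ yt))  # marginal screening (assumed, for simplicity)
    best_risk, best_beta = np.inf, None
    for k in range(1, k_max + 1):
        S = order[:k]
        beta_S, *_ = np.linalg.lstsq(Xt[:, S], yt, rcond=None)
        beta = np.zeros(X.shape[1])
        beta[S] = beta_S
        # Hold-out: score the candidate on the validation split with Huber loss.
        risk = huber_loss(y[val] - X[val] @ beta, delta)
        if risk < best_risk:
            best_risk, best_beta = risk, beta
    return best_beta

def agghoo(X, y, V=5, k_max=10, train_frac=0.8, rng=None):
    # Aggregated hold-out: average the V hold-out-selected estimators.
    rng = np.random.default_rng(rng)
    n = len(y)
    betas = []
    for _ in range(V):
        perm = rng.permutation(n)
        cut = int(train_frac * n)
        betas.append(holdout_estimator(X, y, perm[:cut], perm[cut:], k_max))
    return np.mean(betas, axis=0)
```

Averaging the coefficient vectors (rather than picking a single hyperparameter value, as CV does) is what distinguishes Agghoo; the resulting estimator is generally not exactly sparse, but it smooths out the variability of any single split.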
Cites work
- scientific article; zbMATH DE number 477682
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 7370537
- A distribution-free theory of nonparametric regression
- A survey of cross-validation procedures for model selection
- Asymptotic Analysis of Robust LASSOs in the Presence of Noise With Large Variance
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII -- 2003.
- Degrees of freedom in lasso problems
- Exponential screening and optimal rates of sparse estimation
- Learning without concentration
- Learning without concentration for general loss functions
- Least angle regression. (With discussion)
- Minimum contrast estimators on sieves: Exponential bounds and rates of convergence
- Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles
- Model selection in nonparametric regression
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- On cross-validated Lasso in high dimensions
- On the "degrees of freedom" of the lasso
- Optimal global rates of convergence for nonparametric regression
- Oracle inequalities for cross-validation type procedures
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Piecewise linear regularized solution paths
- Random lasso
- Risk consistency of cross-validation with Lasso-type procedures
- Robust Estimation of a Location Parameter
- Robust Statistics
- Robust linear least squares regression
- Robust regression through the Huber's criterion and adaptive lasso penalty
- Robust statistical learning with Lipschitz and convex loss functions
- Slope heuristics and V-fold model selection in heteroscedastic regression using strongly localized bases
- Stability Selection
- The Lasso, correlated design, and improved oracle inequalities
- The cross-validated adaptive epsilon-net estimator
- The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
- The sparse Fourier transform: theory and practice
- Unified LASSO Estimation by Least Squares Approximation