On the prediction performance of the Lasso
From MaRDI portal
Publication:502891
DOI: 10.3150/15-BEJ756 · zbMATH Open: 1359.62295 · arXiv: 1402.1700 · MaRDI QID: Q502891
Authors: Arnak S. Dalalyan, Mohamed Hebiri, Johannes Lederer
Publication date: 11 January 2017
Published in: Bernoulli
Abstract: Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood. In this paper, we give new insights into this relationship in the context of multiple linear regression. We show, in particular, that the incorporation of a simple correlation measure into the tuning parameter can lead to a nearly optimal prediction performance of the Lasso even for highly correlated covariates. However, we also reveal that for moderately correlated covariates, the prediction performance of the Lasso can be mediocre irrespective of the choice of the tuning parameter. We finally show that our results also lead to near-optimal rates for the least-squares estimator with total variation penalty.
Full work available at URL: https://arxiv.org/abs/1402.1700
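The abstract's central idea, adjusting the Lasso tuning parameter by a simple correlation measure of the covariates, can be sketched as follows. This is an illustrative toy only: the coordinate-descent solver, the maximum-absolute-correlation proxy, and the specific rescaling `lam_base * (1 - max_corr)` are assumptions for demonstration, not the authors' actual construction from the paper.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent on 0.5*||y - Xb||^2 + lam*||b||_1.
    Illustrative solver, not the paper's code."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-column squared norms
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            # soft-thresholding update
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Toy design with a shared latent factor, so the columns are highly correlated.
rng = np.random.default_rng(0)
n, p, s = 100, 20, 3
X = rng.standard_normal((n, p)) + 2.0 * rng.standard_normal((n, 1))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# Hypothetical correlation measure: largest off-diagonal absolute correlation.
corr = np.corrcoef(X, rowvar=False)
max_corr = np.max(np.abs(corr - np.eye(p)))

# Universal-type tuning level, shrunk when covariates are highly correlated
# (a stand-in for the paper's correlation-adjusted calibration).
lam_base = 0.5 * n * np.sqrt(2 * np.log(p) / n)
lam = lam_base * (1.0 - max_corr)

beta_hat = lasso_cd(X, y, lam)
pred_err = np.mean((X @ (beta_hat - beta_true)) ** 2)
```

Here `pred_err` is the in-sample prediction error the paper's bounds concern; under strong correlation `max_corr` is close to 1, so the adjusted `lam` is much smaller than the universal level.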
Cited In (59)
- Estimation of Linear Functionals in High-Dimensional Linear Models: From Sparsity to Nonsparsity
- Statistical guarantees for sparse deep learning
- Element-wise estimation error of generalized Fused Lasso
- Integrating additional knowledge into the estimation of graphical models
- Multivariate trend filtering for lattice data
- Empirical priors and posterior concentration in a piecewise polynomial sequence model
- The Lasso with structured design and entropy of (absolute) convex hulls
- Frame-constrained total variation regularization for white noise regression
- Reconstruction of jointly sparse vectors via manifold optimization
- Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm
- Solution of linear ill-posed problems using overcomplete dictionaries
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Corrected proof of the result of 'A prediction error property of the Lasso estimator and its generalization' by Huang (2003)
- Adapting to unknown noise level in sparse deconvolution
- Tensor denoising with trend filtering
- Adaptive estimation of multivariate piecewise polynomials and bounded variation functions by optimal decision trees
- Adaptive risk bounds in univariate total variation denoising and trend filtering
- On the prediction loss of the Lasso in the partially labeled setting
- Stabilizing the Lasso against cross-validation variability
- Canonical thresholding for nonsparse high-dimensional linear regression
- Inference for high-dimensional instrumental variables regression
- Multivariate extensions of isotonic regression and total variation denoising via entire monotonicity and Hardy-Krause variation
- Oracle inequalities for high-dimensional prediction
- Slope meets Lasso: improved oracle bounds and optimality
- Sharp oracle inequalities for low-complexity priors
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Solution of linear ill-posed problems using random dictionaries
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- High-dimensional latent panel quantile regression with an application to asset pricing
- Lasso-type and Heuristic Strategies in Model Selection and Forecasting
- Estimating piecewise monotone signals
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Prediction error bounds for linear regression with the TREX
- Tuning parameter calibration for \(\ell_1\)-regularized logistic regression
- Ridge regression revisited: debiasing, thresholding and bootstrap
- On the robustness of the generalized fused Lasso to prior specifications
- On Lasso refitting strategies
- Finite impulse response models: a non-asymptotic analysis of the least squares estimator
- Statistical guarantees for regularized neural networks
- The DFS Fused Lasso: Linear-Time Denoising over General Graphs
- Ensemble Subset Regression (ENSURE): Efficient High-dimensional Prediction
- Exact Spike Train Inference Via $\ell_0$ Optimization
- Prediction bounds for higher order total variation regularized least squares
- On the exponentially weighted aggregate with the Laplace prior
- Sampling rates for \(\ell^1\)-synthesis
- Removing the singularity of a penalty via thresholding function matching
- On tight bounds for the Lasso
- Augmented direct learning for conditional average treatment effect estimation with double robustness
- Variable selection under multicollinearity using modified log penalty
- Tuning parameter calibration for personalized prediction in medicine
- Penalized B-spline estimator for regression functions using total variation penalty
- Cross-Validation With Confidence
- On the total variation regularized estimator over a class of tree graphs
- Strong Rules for Discarding Predictors in Lasso-Type Problems
- Logistic regression with total variation regularization
- Approximate \(\ell_0\)-penalized estimation of piecewise-constant signals on graphs
This page was built for publication: On the prediction performance of the Lasso