Abstract: Although the Lasso has been extensively studied, the relationship between its prediction performance and the correlations of the covariates is not fully understood. In this paper, we give new insights into this relationship in the context of multiple linear regression. We show, in particular, that the incorporation of a simple correlation measure into the tuning parameter can lead to a nearly optimal prediction performance of the Lasso even for highly correlated covariates. However, we also reveal that for moderately correlated covariates, the prediction performance of the Lasso can be mediocre irrespective of the choice of the tuning parameter. We finally show that our results also lead to near-optimal rates for the least-squares estimator with total variation penalty.
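The abstract's central idea — folding a simple correlation measure of the design into the Lasso's tuning parameter — can be illustrated with a small sketch. This is only an illustration of the general mechanism, not the paper's actual calibration rule: the equicorrelated design, the choice of `max_corr` as the correlation measure, and the `sqrt(1 - max_corr)` adjustment factor are all assumptions made here for concreteness.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20

# Correlated design: equicorrelated Gaussian covariates (illustrative choice)
rho = 0.5
cov = np.full((p, p), rho) + (1 - rho) * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

# Sparse ground truth and noisy responses
beta = np.zeros(p)
beta[:3] = 1.0
sigma = 0.5  # noise level, assumed known for this sketch
y = X @ beta + rng.normal(scale=sigma, size=n)

# A simple correlation measure: the largest absolute off-diagonal
# entry of the sample correlation matrix of the covariates
R = np.corrcoef(X, rowvar=False)
max_corr = np.max(np.abs(R - np.eye(p)))

# Hypothetical correlation-aware tuning: scale the universal level
# sigma * sqrt(2 log p / n) by a factor that shrinks as correlations grow
lam = sigma * np.sqrt(2 * np.log(p) / n) * np.sqrt(1 - max_corr)

model = Lasso(alpha=lam).fit(X, y)

# In-sample prediction error against the noiseless signal X @ beta
pred_err = np.mean((X @ model.coef_ + model.intercept_ - X @ beta) ** 2)
```

The point of the sketch is only that the tuning parameter becomes a function of an easily computed correlation statistic of the design, rather than a fixed universal level.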
Cited in 63 documents:
- Multivariate trend filtering for lattice data
- Empirical priors and posterior concentration in a piecewise polynomial sequence model
- Statistical guarantees for sparse deep learning
- Integrating additional knowledge into the estimation of graphical models
- Estimation of Linear Functionals in High-Dimensional Linear Models: From Sparsity to Nonsparsity
- Element-wise estimation error of generalized Fused Lasso
- The Lasso with structured design and entropy of (absolute) convex hulls
- High-dimensional latent panel quantile regression with an application to asset pricing
- Ensemble Subset Regression (ENSURE): Efficient High-dimensional Prediction
- On the exponentially weighted aggregate with the Laplace prior
- Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm
- Slope meets Lasso: improved oracle bounds and optimality
- Strong Rules for Discarding Predictors in Lasso-Type Problems
- Improving the prediction performance of the Lasso by subtracting the additive structural noises
- Finite impulse response models: a non-asymptotic analysis of the least squares estimator
- On the prediction loss of the Lasso in the partially labeled setting
- Stabilizing the Lasso against cross-validation variability
- Sampling rates for \(\ell^1\)-synthesis
- The DFS fused Lasso: linear-time denoising over general graphs
- Canonical thresholding for nonsparse high-dimensional linear regression
- Binarsity: a penalization for one-hot encoded features in linear supervised learning
- Solution of linear ill-posed problems using overcomplete dictionaries
- Prediction error bounds for linear regression with the TREX
- Adapting to unknown noise level in sparse deconvolution
- Frame-constrained total variation regularization for white noise regression
- Prediction bounds for higher order total variation regularized least squares
- Removing the singularity of a penalty via thresholding function matching
- Reconstruction of jointly sparse vectors via manifold optimization
- Logistic regression with total variation regularization
- Multivariate extensions of isotonic regression and total variation denoising via entire monotonicity and Hardy-Krause variation
- Optimal two-step prediction in regression
- Sharp oracle inequalities for low-complexity priors
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Cross-validation with confidence
- Tensor denoising with trend filtering
- untitled scientific article (zbMATH DE number 7306926)
- Exact spike train inference via \(\ell_{0}\) optimization
- Augmented direct learning for conditional average treatment effect estimation with double robustness
- Orthogonal one step greedy procedure for heteroscedastic linear models
- Tuning parameter calibration for \(\ell_1\)-regularized logistic regression
- Solution of linear ill-posed problems using random dictionaries
- On the total variation regularized estimator over a class of tree graphs
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Tuning parameter calibration for personalized prediction in medicine
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- Lasso-type and Heuristic Strategies in Model Selection and Forecasting
- On tight bounds for the Lasso
- Approximate \(\ell_0\)-penalized estimation of piecewise-constant signals on graphs
- Statistical guarantees for regularized neural networks
- Estimating piecewise monotone signals
- On the sensitivity of the Lasso to the number of predictor variables
- Inference for high-dimensional instrumental variables regression
- Penalized B-spline estimator for regression functions using total variation penalty
- Corrected proof of the result of 'A prediction error property of the Lasso estimator and its generalization' by Huang (2003)
- untitled scientific article (zbMATH DE number 7626791)
- Adaptive estimation of multivariate piecewise polynomials and bounded variation functions by optimal decision trees
- On the robustness of the generalized fused Lasso to prior specifications
- Ridge regression revisited: debiasing, thresholding and bootstrap
- Variable selection under multicollinearity using modified log penalty
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- On Lasso refitting strategies
- Adaptive risk bounds in univariate total variation denoising and trend filtering
- Oracle inequalities for high-dimensional prediction
This page was built for publication: On the prediction performance of the Lasso
MaRDI item: Q502891