The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
From MaRDI portal
Publication:2054498
Abstract: The Lasso is a popular regression method for high-dimensional problems in which the number of parameters \(p\) is larger than the number of samples \(n\). A useful heuristic relates the statistical properties of the Lasso estimator to those of a simple soft-thresholding denoiser in a denoising problem in which the parameters are observed in Gaussian noise, with a carefully tuned variance. Earlier work confirmed this picture in the limit \(n, p \to \infty\), pointwise in the parameters and in the value of the regularization parameter. Here, we consider a standard random design model and prove exponential concentration of the empirical distribution of the Lasso estimator around the prediction provided by the Gaussian denoising model. Crucially, our results are uniform with respect to the parameter vector belonging to \(\ell_q\) balls, \(q \in [0, 1]\), and with respect to the regularization parameter. This allows us to derive sharp results on the performance of various data-driven procedures for tuning the regularization. Our proofs make use of Gaussian comparison inequalities, and in particular of a version of Gordon's minimax theorem developed by Thrampoulidis, Oymak, and Hassibi, which controls the optimal value of the Lasso optimization problem. Crucially, we prove a stability property of the minimizer in Wasserstein distance, which allows us to characterize properties of the minimizer itself.
Recommendations
- On the distribution of the adaptive LASSO estimator
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- A practical scheme and fast algorithm to tune the Lasso with optimality guarantees
- On the sensitivity of the Lasso to the number of predictor variables
- On the distribution, model selection properties and uniqueness of the Lasso estimator in low and high dimensions
Cites work
- scientific article; zbMATH DE number 4061904
- scientific article; zbMATH DE number 845714
- A modern maximum-likelihood theory for high-dimensional logistic regression
- A statistical mechanics approach to de-biasing and uncertainty estimation in Lasso for random measurements
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Accuracy assessment for high-dimensional linear regression
- Compressive Phase Retrieval via Generalized Approximate Message Passing
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Consistent parameter estimation for Lasso and approximate message passing
- Convex Recovery of a Structured Signal from Independent Random Linear Measurements
- Decoding by Linear Programming
- Degrees of freedom in lasso problems
- Estimation of the mean of a multivariate normal distribution
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Living on the edge: phase transitions in convex programs with random data
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Minimax risk over \(l_ p\)-balls for \(l_ q\)-error
- Mutual Information and Optimality of Approximate Message-Passing in Random Linear Estimation
- Neighborliness of randomly projected simplices in high dimensions
- Non-Negative Principal Component Analysis: Message Passing Algorithms and Sharp Asymptotics
- On asymptotically optimal confidence regions and tests for high-dimensional models
- On cross-validated Lasso in high dimensions
- On the conditions used to prove oracle results for the Lasso
- On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators
- Optimal Transport
- Optimal errors and phase transitions in high-dimensional generalized linear models
- Optimization-Based AMP for Phase Retrieval: The Impact of Initialization and $\ell_{2}$ Regularization
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- Simultaneous analysis of Lasso and Dantzig selector
- Some inequalities for Gaussian processes and applications
- State evolution for approximate message passing with non-separable functions
- Statistics for high-dimensional data. Methods, theory and applications.
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
- The Estimation of Prediction Error
- The LASSO Risk for Gaussian Matrices
- The Noise-Sensitivity Phase Transition in Compressed Sensing
Cited in (20)
- Consistent parameter estimation for Lasso and approximate message passing
- A Unifying Tutorial on Approximate Message Passing
- Noisy linear inverse problems under convex constraints: exact risk asymptotics in high dimensions
- Fundamental barriers to high-dimensional regression with convex penalties
- The Lasso with general Gaussian designs with applications to hypothesis testing
- Precise statistical analysis of classification accuracies for adversarial training
- Aggregated hold out for sparse linear regression with a robust loss function
- A practical scheme and fast algorithm to tune the Lasso with optimality guarantees
- Surprises in high-dimensional ridgeless least squares interpolation
- Sensitivity of \(\ell_1\) minimization to parameter choice
- Asymptotic normality of robust \(M\)-estimators with convex penalty
- Optimal linear discriminators for the discrete choice model in growing dimensions
- Noise covariance estimation in multi-task high-dimensional linear models
- Local convexity of the TAP free energy and AMP convergence for \(\mathbb{Z}_2\)-synchronization
- Generic error bounds for the generalized Lasso with sub-exponential data
- Universality of regularized regression estimators in high dimensions
- Debiasing convex regularized estimators and interval estimation in linear models
- Sharp global convergence guarantees for iterative nonconvex optimization with random data
- The Lasso as an \(\ell _{1}\)-ball model selection procedure