The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
DOI: 10.1214/20-AOS2038
zbMATH Open: 1480.62145
arXiv: 1811.01212
OpenAlex: W3202433451
MaRDI QID: Q2054498
Authors: Léo Miolane, Andrea Montanari
Publication date: 3 December 2021
Published in: The Annals of Statistics
Abstract: The Lasso is a popular regression method for high-dimensional problems in which the number of parameters \(\theta_1, \dots, \theta_N\) is larger than the number of samples: \(N > n\). A useful heuristic relates the statistical properties of the Lasso estimator to those of a simple soft-thresholding denoiser in a denoising problem in which the parameters \((\theta_i)_{i \le N}\) are observed in Gaussian noise with a carefully tuned variance. Earlier work confirmed this picture in the limit \(n, N \to \infty\), pointwise in the parameters \(\theta\) and in the value of the regularization parameter. Here, we consider a standard random design model and prove exponential concentration of its empirical distribution around the prediction provided by the Gaussian denoising model. Crucially, our results are uniform with respect to \(\theta\) belonging to \(\ell_q\) balls, \(q \in [0, 1]\), and with respect to the regularization parameter. This allows us to derive sharp results for the performance of various data-driven procedures to tune the regularization. Our proofs make use of Gaussian comparison inequalities, and in particular of a version of Gordon's minimax theorem developed by Thrampoulidis, Oymak, and Hassibi, which controls the optimum value of the Lasso optimization problem. Crucially, we prove a stability property of the minimizer in Wasserstein distance, which allows us to characterize properties of the minimizer itself.
Full work available at URL: https://arxiv.org/abs/1811.01212
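The Lasso–soft-thresholding correspondence described in the abstract can be checked numerically. Below is a minimal sketch (not code from the paper): it instantiates a standard Gaussian random design model, iterates the state-evolution fixed point for the effective noise level \(\tau\) familiar from the AMP literature the paper builds on, calibrates the regularization parameter \(\lambda\) from a chosen threshold level \(\alpha\), solves the Lasso by plain ISTA, and compares its per-coordinate risk to the soft-thresholding denoiser's prediction. The specific constants (n, N, sigma, alpha), the solver, and the calibration formula are illustrative assumptions, not values or code from the paper.

```python
# Hedged sketch of the Lasso <-> soft-thresholding heuristic from the abstract.
# All sizes and constants below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, t):
    """Soft-thresholding denoiser eta(x; t) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# --- Gaussian random design model: y = X theta0 + sigma * w --------------
n, N, sigma = 500, 1000, 0.5                  # n samples, N parameters (N > n)
delta = n / N
theta0 = np.zeros(N)
support = rng.choice(N, size=50, replace=False)
theta0[support] = rng.normal(0.0, 3.0, size=50)       # sparse signal
X = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))    # i.i.d. N(0, 1/n) entries
y = X @ theta0 + sigma * rng.normal(size=n)

# --- State evolution: fixed point for the effective noise level tau ------
# tau^2 = sigma^2 + (1/delta) E[(eta(theta0 + tau Z; alpha tau) - theta0)^2],
# with the expectation estimated over the empirical entries of theta0.
alpha = 2.0                                   # threshold level (chosen, not tuned)
tau = np.sqrt(sigma**2 + np.mean(theta0**2) / delta)
for _ in range(200):
    z = rng.normal(size=N)
    mse = np.mean((soft_threshold(theta0 + tau * z, alpha * tau) - theta0) ** 2)
    tau = np.sqrt(sigma**2 + mse / delta)

# Calibration of lambda from (alpha, tau), as in the AMP literature:
# lambda = alpha * tau * (1 - (1/delta) * E[eta'(theta0 + tau Z; alpha tau)]).
z = rng.normal(size=N)
active_frac = np.mean(np.abs(theta0 + tau * z) > alpha * tau)
lam = alpha * tau * (1.0 - active_frac / delta)

# --- Solve the Lasso  min_theta (1/2)||y - X theta||^2 + lam ||theta||_1
# with plain ISTA (proximal gradient descent). -----------------------------
L = np.linalg.norm(X, 2) ** 2                 # Lipschitz constant of the gradient
theta = np.zeros(N)
for _ in range(2000):
    grad = X.T @ (X @ theta - y)
    theta = soft_threshold(theta - grad / L, lam / L)

# --- Compare the Lasso risk with the Gaussian denoising model's prediction
denoised = soft_threshold(theta0 + tau * rng.normal(size=N), alpha * tau)
print("Lasso per-coordinate MSE:   ", np.mean((theta - theta0) ** 2))
print("Denoiser prediction of MSE: ", np.mean((denoised - theta0) ** 2))
```

Under the theory surveyed in the abstract, the two printed risks should roughly agree for large \(n, N\); the paper's contribution is that this agreement holds uniformly over sparse balls and over the regularization parameter, which is what licenses data-driven tuning of \(\lambda\) (e.g., minimizing an empirical risk estimate over a grid of \(\lambda\) values).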
Recommendations
- On the distribution of the adaptive LASSO estimator
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- A practical scheme and fast algorithm to tune the Lasso with optimality guarantees
- On the sensitivity of the Lasso to the number of predictor variables
- On the distribution, model selection properties and uniqueness of the Lasso estimator in low and high dimensions
Cites Work
- Statistics for high-dimensional data. Methods, theory and applications.
- Estimation of the mean of a multivariate normal distribution
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- A statistical mechanics approach to de-biasing and uncertainty estimation in Lasso for random measurements
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Degrees of freedom in lasso problems
- Optimal Transport
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Minimax risk over \(l_p\)-balls for \(l_q\)-error
- Decoding by Linear Programming
- Neighborliness of randomly projected simplices in high dimensions
- The Estimation of Prediction Error
- Some inequalities for Gaussian processes and applications
- High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension
- Living on the edge: phase transitions in convex programs with random data
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- The LASSO Risk for Gaussian Matrices
- The Noise-Sensitivity Phase Transition in Compressed Sensing
- The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
- Convex Recovery of a Structured Signal from Independent Random Linear Measurements
- Compressive Phase Retrieval via Generalized Approximate Message Passing
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators
- Mutual Information and Optimality of Approximate Message-Passing in Random Linear Estimation
- On cross-validated Lasso in high dimensions
- Accuracy assessment for high-dimensional linear regression
- Non-Negative Principal Component Analysis: Message Passing Algorithms and Sharp Asymptotics
- Consistent parameter estimation for Lasso and approximate message passing
- A modern maximum-likelihood theory for high-dimensional logistic regression
- State evolution for approximate message passing with non-separable functions
- Optimal errors and phase transitions in high-dimensional generalized linear models
- Optimization-Based AMP for Phase Retrieval: The Impact of Initialization and $\ell_{2}$ Regularization
Cited In (19)
- Asymptotic normality of robust \(M\)-estimators with convex penalty
- Noise covariance estimation in multi-task high-dimensional linear models
- A practical scheme and fast algorithm to tune the Lasso with optimality guarantees
- A Unifying Tutorial on Approximate Message Passing
- Optimal linear discriminators for the discrete choice model in growing dimensions
- Sharp global convergence guarantees for iterative nonconvex optimization with random data
- Aggregated hold out for sparse linear regression with a robust loss function
- Universality of regularized regression estimators in high dimensions
- Noisy linear inverse problems under convex constraints: exact risk asymptotics in high dimensions
- Precise statistical analysis of classification accuracies for adversarial training
- Local convexity of the TAP free energy and AMP convergence for \(\mathbb{Z}_2\)-synchronization
- Generic error bounds for the generalized Lasso with sub-exponential data
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- Debiasing convex regularized estimators and interval estimation in linear models
- Consistent parameter estimation for Lasso and approximate message passing
- Surprises in high-dimensional ridgeless least squares interpolation
- Sensitivity of \(\ell_1\) minimization to parameter choice
- The Lasso with general Gaussian designs with applications to hypothesis testing
- Fundamental barriers to high-dimensional regression with convex penalties