Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_1$-Constrained Quadratic Programming (Lasso)
Publication: 4975847
DOI: 10.1109/TIT.2009.2016018
zbMATH Open: 1367.62220
MaRDI QID: Q4975847
Authors: Martin J. Wainwright
Publication date: 8 August 2017
Published in: IEEE Transactions on Information Theory
Mathematics Subject Classification:
- Quadratic programming (90C20)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Cited in (showing only the first 100 items):
- Adjusting for high-dimensional covariates in sparse precision matrix estimation by \(\ell_1\)-penalization
- Model selection for high-dimensional quadratic regression via regularization
- Estimating time-varying networks
- The adaptive Lasso in high-dimensional sparse heteroscedastic models
- Detection boundary in sparse regression
- On the conditions used to prove oracle results for the Lasso
- Least squares after model selection in high-dimensional sparse models
- Sign-constrained least squares estimation for high-dimensional regression
- Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- RIPless compressed sensing from anisotropic measurements
- Sparse directed acyclic graphs incorporating the covariates
- On semidefinite relaxations for the block model
- Discussion: Latent variable graphical model selection via convex optimization
- A significance test for the lasso
- Bootstrap inference for network construction with an application to a breast cancer microarray study
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- Sparse semiparametric discriminant analysis
- Rejoinder: Latent variable graphical model selection via convex optimization
- High-dimensional change-point estimation: combining filtering with convex optimization
- The Lasso problem and uniqueness
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- On constrained and regularized high-dimensional regression
- Sparse classification: a scalable discrete optimization perspective
- Discussion: ``A significance test for the lasso''
- Minimax-optimal nonparametric regression in high dimensions
- High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking
- Discussion: Latent variable graphical model selection via convex optimization
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Estimation and variable selection with exponential weights
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- Improved variable selection with forward-lasso adaptive shrinkage
- On model selection consistency of regularized M-estimators
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Two are better than one: fundamental parameters of frame coherence
- Nonnegative-Lasso and application in index tracking
- A discussion on practical considerations with sparse regression methodologies
- A new perspective on least squares under convex constraint
- On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property
- AIC for the Lasso in generalized linear models
- A numerical exploration of compressed sampling recovery
- A posterior probability approach for gene regulatory network inference in genetic perturbation data
- Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error
- Factor-Adjusted Regularized Model Selection
- Nonnegative elastic net and application in index tracking
- Rejoinder: ``A significance test for the lasso''
- Stability Selection
- Self-concordant analysis for logistic regression
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Best subset selection via a modern optimization lens
- Bayesian hyper-Lassos with non-convex penalization
- Statistical significance in high-dimensional linear models
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- An algorithm for quadratic \(\ell_1\)-regularized optimization with a flexible active-set strategy
- The log-linear group-lasso estimator and its asymptotic properties
- Latent variable graphical model selection via convex optimization
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Discussion: ``A significance test for the lasso''
- Pivotal estimation via square-root lasso in nonparametric regression
- \(\ell_{1}\)-penalization for mixture regression models
- Autoregressive process modeling via the Lasso procedure
- Worst possible sub-directions in high-dimensional models
- Stability
- Estimation of high-dimensional graphical models using regularized score matching
- Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
- Structured sparsity through convex optimization
- Support union recovery in high-dimensional multivariate regression
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- A distribution-based Lasso for a general single-index model
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- On the use of the Lasso for instrumental variables estimation with some invalid instruments
- Oracle inequalities and optimal inference under group sparsity
- Semi-analytic resampling in Lasso
- Sparsistency and agnostic inference in sparse PCA
- A bootstrap Lasso+partial ridge method to construct confidence intervals for parameters in high-dimensional sparse linear models
- Principled sure independence screening for Cox models with ultra-high-dimensional covariates
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Conditional score matching for high-dimensional partial graphical models
- Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
- Optimal variable selection in multi-group sparse discriminant analysis
- Recovery of partly sparse and dense signals
- Sharp recovery bounds for convex demixing, with applications
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Learning high-dimensional Gaussian linear structural equation models with heterogeneous error variances
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Penalized logspline density estimation using total variation penalty
- Sparse regression: scalable algorithms and empirical performance
- Sorted concave penalized regression
- Bayesian augmented Lagrangian algorithm for system identification
- Multi-stage convex relaxation for feature selection
- A general family of trimmed estimators for robust high-dimensional data analysis
- Adaptive log-density estimation
- The generalized Lasso problem and uniqueness
- Model selection consistency of Lasso for empirical data