Near-ideal model selection by \(\ell _{1}\) minimization
From MaRDI portal
Publication:834335
DOI: 10.1214/08-AOS653 · zbMath: 1173.62053 · arXiv: 0801.0345 · MaRDI QID: Q834335
Emmanuel J. Candès, Yaniv Plan
Publication date: 19 August 2009
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0801.0345
Keywords: model selection; eigenvalues of random matrices; oracle inequalities; compressed sensing; lasso; incoherence
62G08: Nonparametric regression and quantile regression
62H12: Estimation in multivariate analysis
62J05: Linear regression; mixed models
62G05: Nonparametric estimation
90C90: Applications of mathematical programming
90C20: Quadratic programming
94A12: Signal theory (characterization, reconstruction, filtering, etc.)
Related Items
- Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Discussion: "A significance test for the lasso"
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Phase transition in limiting distributions of coherence of high-dimensional random matrices
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Two are better than one: fundamental parameters of frame coherence
- Invertibility of random submatrices via tail-decoupling and a matrix Chernoff inequality
- UPS delivers optimal phase diagram in high-dimensional variable selection
- Reconstructing DNA copy number by penalized estimation and imputation
- Compressed sensing with coherent and redundant dictionaries
- \(\ell_{1}\)-penalization for mixture regression models
- Adaptive Dantzig density estimation
- Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices
- A necessary and sufficient condition for exact sparse recovery by \(\ell_1\) minimization
- Analysis of sparse MIMO radar
- High-dimensional Gaussian model selection on a Gaussian design
- Space alternating penalized Kullback proximal point algorithms for maximizing likelihood with nondifferentiable penalty
- \(\ell_{1}\)-regularized linear regression: persistence and oracle inequalities
- Compressed sensing and matrix completion with constant proportion of corruptions
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- The Lasso problem and uniqueness
- Honest variable selection in linear and logistic regression models via \(\ell_{1}\) and \(\ell_{1}+\ell_{2}\) penalization
- On the conditions used to prove oracle results for the Lasso
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- A significance test for the lasso
- Discussion: "A significance test for the lasso"
- Rejoinder: "A significance test for the lasso"
- Pivotal estimation via square-root lasso in nonparametric regression
- Estimation and variable selection with exponential weights
- Concentration of \(S\)-largest mutilated vectors with \(\ell_p\)-quasinorm for \(0<p\leq 1\) and its applications
- Consistency of \(\ell_1\) recovery from noisy deterministic measurements
Uses Software
Cites Work
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Lasso-type recovery of sparse representations for high-dimensional data
- Estimating the dimension of a model
- Risk bounds for model selection via penalization
- Neural networks: A review from a statistical perspective. With comments and a rejoinder by the authors
- Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
- The risk inflation criterion for multiple regression
- Norms of random submatrices and sparse approximation
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Quantitative robust uncertainty principles and optimally sparse decompositions
- Stable recovery of sparse overcomplete representations in the presence of noise
- Linear Inversion of Band-Limited Reflection Seismograms
- Atomic Decomposition by Basis Pursuit
- New tight frames of curvelets and optimal representations of objects with piecewise \(C^2\) singularities
- Sparse Approximate Solutions to Linear Systems
- Some Comments on \(C_p\)
- Gaussian model selection
- A new look at the statistical model identification