Near-ideal model selection by \(\ell _{1}\) minimization

DOI: 10.1214/08-AOS653
zbMath: 1173.62053
arXiv: 0801.0345
MaRDI QID: Q834335

Emmanuel J. Candès, Yaniv Plan

Publication date: 19 August 2009

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/0801.0345
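The record carries no abstract, so for orientation: the paper studies support recovery in the sparse linear model \(y = X\beta + z\) via the lasso, i.e. \(\ell_1\)-penalized least squares, with a penalty level of order \(\sigma\sqrt{2\log p}\). Below is a minimal illustrative sketch, not taken from the paper: the dimensions, seed, amplitudes, the exact penalty constant, and the use of scikit-learn's Lasso solver are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sparse linear model y = X beta + z with roughly unit-norm design columns
# (illustrative dimensions, not the paper's).
rng = np.random.default_rng(0)
n, p, s = 200, 500, 10        # samples, variables, true sparsity
sigma = 0.5                   # noise level

X = rng.standard_normal((n, p)) / np.sqrt(n)   # columns have roughly unit norm
beta = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
beta[support] = 10.0 * rng.choice([-1.0, 1.0], size=s)  # amplitudes well above noise
y = X @ beta + sigma * rng.standard_normal(n)

# Penalty of order sigma * sqrt(2 log p). scikit-learn's Lasso minimizes
# (1/(2n))||y - Xb||^2 + alpha*||b||_1, so alpha = lam/n matches the
# unnormalized objective (1/2)||y - Xb||^2 + lam*||b||_1.
lam = 2.0 * sigma * np.sqrt(2.0 * np.log(p))
fit = Lasso(alpha=lam / n, fit_intercept=False, max_iter=50_000).fit(X, y)

print("true support:     ", np.sort(support))
print("estimated support:", np.flatnonzero(fit.coef_))
```

With these settings the estimated support typically coincides with the true one; shrinking the nonzero amplitudes toward the noise level makes recovery fail, consistent with the beta-min-type conditions that results of this kind require.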


MSC Classification

62G08: Nonparametric regression and quantile regression

62H12: Estimation in multivariate analysis

62J05: Linear regression; mixed models

62G05: Nonparametric estimation

90C90: Applications of mathematical programming

90C20: Quadratic programming

94A12: Signal theory (characterization, reconstruction, filtering, etc.)


Related Items

Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Discussion: ``A significance test for the lasso''
Discussion: ``A significance test for the lasso''
Discussion: ``A significance test for the lasso''
Discussion: ``A significance test for the lasso''
Discussion: ``A significance test for the lasso''
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
Phase transition in limiting distributions of coherence of high-dimensional random matrices
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Two are better than one: fundamental parameters of frame coherence
Invertibility of random submatrices via tail-decoupling and a matrix Chernoff inequality
UPS delivers optimal phase diagram in high-dimensional variable selection
Reconstructing DNA copy number by penalized estimation and imputation
Compressed sensing with coherent and redundant dictionaries
\(\ell_{1}\)-penalization for mixture regression models
Adaptive Dantzig density estimation
Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices
A necessary and sufficient condition for exact sparse recovery by \(\ell_1\) minimization
Analysis of sparse MIMO radar
High-dimensional Gaussian model selection on a Gaussian design
Space alternating penalized Kullback proximal point algorithms for maximizing likelihood with nondifferentiable penalty
\(\ell_{1}\)-regularized linear regression: persistence and oracle inequalities
Compressed sensing and matrix completion with constant proportion of corruptions
Minimax risks for sparse regressions: ultra-high dimensional phenomenons
The Lasso problem and uniqueness
Honest variable selection in linear and logistic regression models via \(\ell_{1}\) and \(\ell_{1}+\ell_{2}\) penalization
On the conditions used to prove oracle results for the Lasso
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
A significance test for the lasso
Discussion: ``A significance test for the lasso''
Rejoinder: ``A significance test for the lasso''
Pivotal estimation via square-root lasso in nonparametric regression
Estimation and variable selection with exponential weights
Concentration of \(S\)-largest mutilated vectors with \(\ell_p\)-quasinorm for \(0<p\leq 1\) and its applications
Consistency of \(\ell_1\) recovery from noisy deterministic measurements


Uses Software


Cites Work