Near-ideal model selection by \(\ell _{1}\) minimization
DOI: 10.1214/08-AOS653 · zbMath: 1173.62053 · arXiv: 0801.0345 · MaRDI QID: Q834335
Emmanuel J. Candès, Yaniv Plan
Publication date: 19 August 2009
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0801.0345
Keywords: model selection; eigenvalues of random matrices; oracle inequalities; compressed sensing; lasso; incoherence
62G08: Nonparametric regression and quantile regression
62H12: Estimation in multivariate analysis
62J05: Linear regression; mixed models
62G05: Nonparametric estimation
90C90: Applications of mathematical programming
90C20: Quadratic programming
94A12: Signal theory (characterization, reconstruction, filtering, etc.)
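The entry's subject, model selection by \(\ell_1\)-penalized least squares (the lasso), has a simple closed form when the design is orthonormal: each least-squares coefficient is soft-thresholded, so small coefficients are set exactly to zero and the support of the estimate performs the selection. A minimal pure-Python sketch of that special case (the function name and sample numbers are illustrative, not taken from the paper):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator: shrink z toward 0 by lam, clipping at 0.

    For an orthonormal design X (X^T X = I), the lasso estimate with
    penalty parameter lam is beta_hat_i = soft_threshold((X^T y)_i, lam).
    """
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Illustrative correlations (X^T y)_i and penalty level; coefficients with
# |z| <= lam are zeroed out, which is how the lasso selects a model.
correlations = [2.5, -0.3, 0.9, -1.5]
lam = 1.0
beta_hat = [soft_threshold(z, lam) for z in correlations]
# beta_hat == [1.5, 0.0, 0.0, -0.5]
```

For a general (non-orthonormal) design the lasso has no closed form and is computed iteratively, e.g. by coordinate descent; the thresholding step above remains the inner building block.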
Related Items
- Sparse signal recovery via non-convex optimization and overcomplete dictionaries
- Adapting to unknown noise level in sparse deconvolution
- Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
- Chaotic Binary Sensing Matrices
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Discussion: ``A significance test for the lasso
- Discussion: ``A significance test for the lasso
- Discussion: ``A significance test for the lasso
- Discussion: ``A significance test for the lasso
- Discussion: ``A significance test for the lasso
- Best subset selection via a modern optimization lens
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Conjugate gradient acceleration of iteratively re-weighted least squares methods
- Deterministic convolutional compressed sensing matrices
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Phase transition in limiting distributions of coherence of high-dimensional random matrices
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Two are better than one: fundamental parameters of frame coherence
- Invertibility of random submatrices via tail-decoupling and a matrix Chernoff inequality
- UPS delivers optimal phase diagram in high-dimensional variable selection
- Covariate assisted screening and estimation
- Normalized and standard Dantzig estimators: two approaches
- Reconstructing DNA copy number by penalized estimation and imputation
- Compressed sensing with coherent and redundant dictionaries
- \(\ell_{1}\)-penalization for mixture regression models
- Adaptive Dantzig density estimation
- Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices
- A necessary and sufficient condition for exact sparse recovery by \(\ell_1\) minimization
- Analysis of sparse MIMO radar
- Controlling the false discovery rate via knockoffs
- Optimal dual certificates for noise robustness bounds in compressive sensing
- High-dimensional Gaussian model selection on a Gaussian design
- LOL selection in high dimension
- On the sensitivity of the Lasso to the number of predictor variables
- Space alternating penalized Kullback proximal point algorithms for maximizing likelihood with nondifferentiable penalty
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Compressed sensing and matrix completion with constant proportion of corruptions
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- The Lasso problem and uniqueness
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the conditions used to prove oracle results for the Lasso
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- The generalized Lasso problem and uniqueness
- A significance test for the lasso
- Discussion: ``A significance test for the lasso
- Rejoinder: ``A significance test for the lasso
- Pivotal estimation via square-root lasso in nonparametric regression
- A global homogeneity test for high-dimensional linear regression
- Prediction error bounds for linear regression with the TREX
- Sharp oracle inequalities for low-complexity priors
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Refined analysis of sparse MIMO radar
- High-dimensional change-point estimation: combining filtering with convex optimization
- Error bounds for compressed sensing algorithms with group sparsity: A unified approach
- The degrees of freedom of partly smooth regularizers
- Estimation and variable selection with exponential weights
- Concentration of \(S\)-largest mutilated vectors with \(\ell_p\)-quasinorm for \(0<p\leq 1\) and its applications
- Consistency of \(\ell_1\) recovery from noisy deterministic measurements
- Randomized pick-freeze for sparse Sobol indices estimation in high dimension
- Adventures in Compressive Sensing Based MIMO Radar
- Discrete A Priori Bounds for the Detection of Corrupted PDE Solutions in Exascale Computations
- A NEW APPROACH TO SELECT THE BEST SUBSET OF PREDICTORS IN LINEAR REGRESSION MODELLING: BI-OBJECTIVE MIXED INTEGER LINEAR PROGRAMMING
Cites Work
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Lasso-type recovery of sparse representations for high-dimensional data
- Estimating the dimension of a model
- Risk bounds for model selection via penalization
- Neural networks: A review from a statistical perspective. With comments and a rejoinder by the authors
- Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
- The risk inflation criterion for multiple regression
- Norms of random submatrices and sparse approximation
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Quantitative robust uncertainty principles and optimally sparse decompositions
- Stable recovery of sparse overcomplete representations in the presence of noise
- Linear Inversion of Band-Limited Reflection Seismograms
- Atomic Decomposition by Basis Pursuit
- New tight frames of curvelets and optimal representations of objects with piecewise \(C^2\) singularities
- Sparse Approximate Solutions to Linear Systems
- Some Comments on \(C_p\)
- Gaussian model selection
- A new look at the statistical model identification