Near-ideal model selection by \(\ell _{1}\) minimization
Publication: 834335
DOI: 10.1214/08-AOS653 · zbMath: 1173.62053 · arXiv: 0801.0345 · OpenAlex: W3101762025 · MaRDI QID: Q834335
Emmanuel J. Candès, Yaniv Plan
Publication date: 19 August 2009
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0801.0345
Nonparametric regression and quantile regression (62G08) Estimation in multivariate analysis (62H12) Linear regression; mixed models (62J05) Nonparametric estimation (62G05) Applications of mathematical programming (90C90) Quadratic programming (90C20) Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
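For context, a brief sketch of the setup the title refers to (not part of the portal record; notation as in the paper's Gaussian linear model \(y = X\beta + z\) with \(z \sim \mathcal{N}(0, \sigma^2 I)\)): the \(\ell_1\)-minimization procedure is the Lasso
\[
\hat{\beta} = \operatorname*{arg\,min}_{b \in \mathbb{R}^p} \; \tfrac{1}{2}\,\|y - Xb\|_{\ell_2}^2 + \lambda_p\, \sigma\, \|b\|_{\ell_1},
\]
with a regularization level on the order of \(\lambda_p \asymp \sqrt{2\log p}\). Under incoherence of \(X\) and a generically sparse \(\beta\), the paper shows that, with high probability, the squared estimation error is within a factor proportional to \(\log p\) of the ideal risk \(\sum_i \min(\beta_i^2, \sigma^2)\), i.e. near-ideal model selection.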
Related Items
- Refined analysis of sparse MIMO radar
- LOL selection in high dimension
- Best subset selection via a modern optimization lens
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Conjugate gradient acceleration of iteratively re-weighted least squares methods
- Deterministic convolutional compressed sensing matrices
- Sparse signal recovery via non-convex optimization and overcomplete dictionaries
- High-dimensional change-point estimation: combining filtering with convex optimization
- Inadequacy of linear methods for minimal sensor placement and feature selection in nonlinear systems: a new approach using secants
- Error bounds for compressed sensing algorithms with group sparsity: A unified approach
- The degrees of freedom of partly smooth regularizers
- Space alternating penalized Kullback proximal point algorithms for maximizing likelihood with nondifferentiable penalty
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
- Controlling the false discovery rate via knockoffs
- Adventures in Compressive Sensing Based MIMO Radar
- \(\ell_{1}\)-penalization for mixture regression models
- Optimal dual certificates for noise robustness bounds in compressive sensing
- Discrete A Priori Bounds for the Detection of Corrupted PDE Solutions in Exascale Computations
- Controlling False Discovery Rate Using Gaussian Mirrors
- A resampling approach for confidence intervals in linear time-series models after model selection
- Compressed sensing and matrix completion with constant proportion of corruptions
- Phase transition in limiting distributions of coherence of high-dimensional random matrices
- Decomposition of dynamical signals into jumps, oscillatory patterns, and possible outliers
- Adaptive Dantzig density estimation
- Adaptive decomposition-based evolutionary approach for multiobjective sparse reconstruction
- Statistical analysis of sparse approximate factor models
- Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
- Two are better than one: fundamental parameters of frame coherence
- Adapting to unknown noise level in sparse deconvolution
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- The Lasso problem and uniqueness
- Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the conditions used to prove oracle results for the Lasso
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- Invertibility of random submatrices via tail-decoupling and a matrix Chernoff inequality
- UPS delivers optimal phase diagram in high-dimensional variable selection
- Estimation and variable selection with exponential weights
- Concentration of \(S\)-largest mutilated vectors with \(\ell_p\)-quasinorm for \(0<p\leq 1\) and its applications
- Consistency of \(\ell_1\) recovery from noisy deterministic measurements
- An Introduction to Compressed Sensing
- Covariate assisted screening and estimation
- A necessary and sufficient condition for exact sparse recovery by \(\ell_1\) minimization
- Normalized and standard Dantzig estimators: two approaches
- A significance test for the lasso
- Discussion: ``A significance test for the lasso''
- Rejoinder: ``A significance test for the lasso''
- Pivotal estimation via square-root lasso in nonparametric regression
- The generalized Lasso problem and uniqueness
- Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
- Reconstructing DNA copy number by penalized estimation and imputation
- Compressed sensing with coherent and redundant dictionaries
- A global homogeneity test for high-dimensional linear regression
- Prediction error bounds for linear regression with the TREX
- High-dimensional Gaussian model selection on a Gaussian design
- Randomized pick-freeze for sparse Sobol indices estimation in high dimension
- Chaotic Binary Sensing Matrices
- On the sensitivity of the Lasso to the number of predictor variables
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Analysis of sparse MIMO radar
- Understanding large text corpora via sparse machine learning
- Sharp oracle inequalities for low-complexity priors
- l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies
- Sampling from non-smooth distributions through Langevin diffusion
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Group sparse optimization for learning predictive state representations
- A NEW APPROACH TO SELECT THE BEST SUBSET OF PREDICTORS IN LINEAR REGRESSION MODELLING: BI-OBJECTIVE MIXED INTEGER LINEAR PROGRAMMING
- Submatrices with NonUniformly Selected Random Supports and Insights into Sparse Approximation
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
- Discussion: ``A significance test for the lasso''
Uses Software
Cites Work
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Lasso-type recovery of sparse representations for high-dimensional data
- Estimating the dimension of a model
- Risk bounds for model selection via penalization
- Neural networks: A review from a statistical perspective. With comments and a rejoinder by the authors
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- The risk inflation criterion for multiple regression
- Norms of random submatrices and sparse approximation
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Quantitative robust uncertainty principles and optimally sparse decompositions
- Stable recovery of sparse overcomplete representations in the presence of noise
- Linear Inversion of Band-Limited Reflection Seismograms
- Atomic Decomposition by Basis Pursuit
- New tight frames of curvelets and optimal representations of objects with piecewise \(C^2\) singularities
- Sparse Approximate Solutions to Linear Systems
- Some Comments on \(C_p\)
- Gaussian model selection
- A new look at the statistical model identification