Near-ideal model selection by \(\ell _{1}\) minimization

From MaRDI portal
Revision as of 13:48, 30 January 2024 (created automatically from import240129110113)

Publication:834335

DOI: 10.1214/08-AOS653
zbMath: 1173.62053
arXiv: 0801.0345
OpenAlex: W3101762025
MaRDI QID: Q834335

Emmanuel J. Candès, Yaniv Plan

Publication date: 19 August 2009

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/0801.0345




Related Items (75)

Refined analysis of sparse MIMO radar
LOL selection in high dimension
Best subset selection via a modern optimization lens
SLOPE is adaptive to unknown sparsity and asymptotically minimax
Conjugate gradient acceleration of iteratively re-weighted least squares methods
Deterministic convolutional compressed sensing matrices
Sparse signal recovery via non-convex optimization and overcomplete dictionaries
High-dimensional change-point estimation: combining filtering with convex optimization
Inadequacy of linear methods for minimal sensor placement and feature selection in nonlinear systems: a new approach using secants
Error bounds for compressed sensing algorithms with group sparsity: A unified approach
The degrees of freedom of partly smooth regularizers
Space alternating penalized Kullback proximal point algorithms for maximizing likelihood with nondifferentiable penalty
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
Controlling the false discovery rate via knockoffs
Adventures in Compressive Sensing Based MIMO Radar
\(\ell_{1}\)-penalization for mixture regression models
Optimal dual certificates for noise robustness bounds in compressive sensing
Discrete A Priori Bounds for the Detection of Corrupted PDE Solutions in Exascale Computations
Controlling False Discovery Rate Using Gaussian Mirrors
A resampling approach for confidence intervals in linear time-series models after model selection
Compressed sensing and matrix completion with constant proportion of corruptions
Phase transition in limiting distributions of coherence of high-dimensional random matrices
Decomposition of dynamical signals into jumps, oscillatory patterns, and possible outliers
Adaptive Dantzig density estimation
Adaptive decomposition-based evolutionary approach for multiobjective sparse reconstruction
Statistical analysis of sparse approximate factor models
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Two are better than one: fundamental parameters of frame coherence
Adapting to unknown noise level in sparse deconvolution
Minimax risks for sparse regressions: ultra-high dimensional phenomenons
The Lasso problem and uniqueness
Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices
Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
On the conditions used to prove oracle results for the Lasso
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Invertibility of random submatrices via tail-decoupling and a matrix Chernoff inequality
UPS delivers optimal phase diagram in high-dimensional variable selection
Estimation and variable selection with exponential weights
Concentration of \(S\)-largest mutilated vectors with \(\ell_p\)-quasinorm for \(0<p\leq 1\) and its applications
Consistency of \(\ell_1\) recovery from noisy deterministic measurements
An Introduction to Compressed Sensing
Covariate assisted screening and estimation
A necessary and sufficient condition for exact sparse recovery by \(\ell_1\) minimization
Normalized and standard Dantzig estimators: two approaches
A significance test for the lasso
Discussion: "A significance test for the lasso"
Rejoinder: "A significance test for the lasso"
Pivotal estimation via square-root lasso in nonparametric regression
The generalized Lasso problem and uniqueness
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Reconstructing DNA copy number by penalized estimation and imputation
Compressed sensing with coherent and redundant dictionaries
A global homogeneity test for high-dimensional linear regression
Prediction error bounds for linear regression with the TREX
High-dimensional Gaussian model selection on a Gaussian design
Randomized pick-freeze for sparse Sobol indices estimation in high dimension
Chaotic Binary Sensing Matrices
On the sensitivity of the Lasso to the number of predictor variables
A general theory of concave regularization for high-dimensional sparse estimation problems
Analysis of sparse MIMO radar
Understanding large text corpora via sparse machine learning
Sharp oracle inequalities for low-complexity priors
l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies
Sampling from non-smooth distributions through Langevin diffusion
Unnamed Item
Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
Group sparse optimization for learning predictive state representations
A NEW APPROACH TO SELECT THE BEST SUBSET OF PREDICTORS IN LINEAR REGRESSION MODELLING: BI-OBJECTIVE MIXED INTEGER LINEAR PROGRAMMING
Submatrices with NonUniformly Selected Random Supports and Insights into Sparse Approximation
Discussion: "A significance test for the lasso"
Discussion: "A significance test for the lasso"
Discussion: "A significance test for the lasso"
Discussion: "A significance test for the lasso"
Discussion: "A significance test for the lasso"


Uses Software



Cites Work




This page was built for publication: Near-ideal model selection by \(\ell _{1}\) minimization