A unified approach to model selection and sparse recovery using regularized least squares
From MaRDI portal
Publication:117370
DOI: 10.1214/09-aos683 · zbMath: 1369.62156 · arXiv: 0905.3573 · MaRDI QID: Q117370
Jinchi Lv, Yingying Fan
Publication date: 1 December 2009
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0905.3573
Keywords: model selection; high dimensionality; sparse recovery; regularized least squares; concave penalty; weak oracle property
62G08: Nonparametric regression and quantile regression
62J05: Linear regression; mixed models
94A12: Signal theory (characterization, reconstruction, filtering, etc.)
Related Items
GGMncv, Manifold elastic net: a unified framework for sparse dimension reduction, Comments on: \(\ell _{1}\)-penalization for mixture regression models, Solve exactly an under determined linear system by minimizing least squares regularized with an \(\ell_0\) penalty, Parametric or nonparametric? A parametricness index for model selection, Variable selection using penalized empirical likelihood, Sparse estimation in functional linear regression, Bridge estimation for generalized linear models with a diverging number of parameters
Uses Software
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Regularization in statistics
- Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models
- High-dimensional classification using features annealed independence rules
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Nonconcave penalized likelihood with a diverging number of parameters.
- Least angle regression. (With discussion)
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Variable selection in semiparametric regression modeling
- Variable selection using MM algorithms
- Better Subset Regression Using the Nonnegative Garrote
- A Short Proof and a Generalization of Miranda's Existence Theorem
- Decoding by Linear Programming
- Stable recovery of sparse overcomplete representations in the presence of noise
- Just relax: convex programming methods for identifying sparse signals in noise
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- DASSO: Connections Between the Dantzig Selector and Lasso
- Atomic Decomposition by Basis Pursuit
- Ideal spatial adaptation by wavelet shrinkage
- Regularization of Wavelet Approximations
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Local Strong Homogeneity of a Regularized Estimator
- A Statistical View of Some Chemometrics Regression Tools
- Tuning parameter selectors for the smoothly clipped absolute deviation method
- Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ 1 minimization