Discussion: One-step sparse estimates in nonconcave penalized likelihood models
From MaRDI portal
Publication: 5966368
DOI: 10.1214/07-AOS0316C
zbMath: 1282.62110
arXiv: 0808.1025
OpenAlex: W2086621676
MaRDI QID: Q5966368
Publication date: 28 August 2008
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0808.1025
Related Items (5)
- A Unified View of Exact Continuous Penalties for $\ell_2$-$\ell_0$ Minimization
- Majorization-minimization algorithms for nonsmoothly penalized objective functions
- Ultrahigh dimensional variable selection through the penalized maximum trimmed likelihood estimator
- Nearly unbiased variable selection under minimax concave penalty
- Fast selection of nonlinear mixed effect models using penalized likelihood
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Nonconcave penalized likelihood with a diverging number of parameters.
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A Statistical View of Some Chemometrics Regression Tools
- For most large underdetermined systems of equations, the minimal 𝓁1‐norm near‐solution approximates the sparsest near‐solution