Discussion: One-step sparse estimates in nonconcave penalized likelihood models
Publication: 5966368
DOI: 10.1214/07-AOS0316
zbMath: 1282.62110
arXiv: 0808.1025
OpenAlex: W2086621676
MaRDI QID: Q5966368
Publication date: 28 August 2008
Published in: The Annals of Statistics
Abstract: Discussion of "One-step sparse estimates in nonconcave penalized likelihood models" [arXiv:0808.1012]
Full work available at URL: https://arxiv.org/abs/0808.1025
Classification (MSC):
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Asymptotic properties of nonparametric inference (62G20)
- Linear regression; mixed models (62J05)
- Monte Carlo methods (65C05)
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Nonconcave penalized likelihood with a diverging number of parameters
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- High-dimensional graphs and variable selection with the Lasso
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A Statistical View of Some Chemometrics Regression Tools
- For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution
Related Items (5)
- A Unified View of Exact Continuous Penalties for $\ell_2$-$\ell_0$ Minimization
- Majorization-minimization algorithms for nonsmoothly penalized objective functions
- Ultrahigh dimensional variable selection through the penalized maximum trimmed likelihood estimator
- Nearly unbiased variable selection under minimax concave penalty
- Fast selection of nonlinear mixed effect models using penalized likelihood