Nonconcave penalized likelihood with a diverging number of parameters.




DOI: 10.1214/009053604000000256
zbMath: 1092.62031
arXiv: math/0406466
MaRDI QID: Q1879926

Jianqing Fan, Heng Peng

Publication date: 15 September 2004

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/math/0406466
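For context, a brief sketch of the setting (notation adapted from the paper; the precise regularity conditions are stated there): the estimator maximizes a penalized log-likelihood of the form

\[
Q_n(\beta) = \sum_{i=1}^{n} \log f(V_i, \beta) - n \sum_{j=1}^{p_n} p_{\lambda_n}(|\beta_j|),
\]

where the number of parameters \(p_n\) is allowed to diverge with the sample size \(n\), and \(p_{\lambda_n}\) is a nonconcave penalty such as the SCAD penalty of Fan and Li (2001). Consistency, an oracle property, and asymptotic distribution theory are established under growth conditions on \(p_n\), matching the MSC classifications below.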


62F12: Asymptotic properties of parametric estimators

62E20: Asymptotic distribution theory in statistics

62F03: Parametric hypothesis testing


Related Items

Discussion: One-step sparse estimates in nonconcave penalized likelihood models
Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
Regularization in statistics
One-step sparse estimates in nonconcave penalized likelihood models
Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models
"Preconditioning" for feature selection and regression in high-dimensional problems
Dimension reduction based on constrained canonical correlation and variable filtering
Profile-kernel likelihood inference with diverging number of parameters
When do stepwise algorithms meet subset selection criteria?
Nonconcave penalized inverse regression in single-index models with high dimensional predictors
Relaxed Lasso
SCAD-penalized regression in high-dimensional partially linear models
Estimating the dimension of a model
Asymptotic properties of bridge estimators in sparse high-dimensional regression models
On the "degrees of freedom" of the lasso
The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
Rodeo: Sparse, greedy nonparametric regression
Properties of principal component methods for functional and longitudinal data analysis
Variable selection using MM algorithms
Piecewise linear regularized solution paths
Covariate Selection for Linear Errors-in-Variables Regression Models
Logistic Discrimination with Total Variation Regularization

