Combined-penalized likelihood estimations with a diverging number of parameters
DOI: 10.1080/02664763.2013.868415; zbMath: 1352.62115; OpenAlex: W1974639773; MaRDI QID: Q3179246
Mingqiu Wang, Ying Xu, Ying Dong, Lixin Song
Publication date: 21 December 2016
Published in: Journal of Applied Statistics
Full work available at URL: https://doi.org/10.1080/02664763.2013.868415
Keywords: asymptotic normality; Bayesian information criterion; variable selection; oracle property; combined penalization
MSC classifications:
- Asymptotic properties of parametric estimators (62F12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Related Items (1)
Cites Work
- Quadratic approximation for nonconvex penalized estimations with a diverging number of parameters
- Empirical likelihood for a varying coefficient partially linear model with diverging number of parameters
- Profile-kernel likelihood inference with diverging number of parameters
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Nonconcave penalized likelihood with a diverging number of parameters
- Least angle regression. (With discussion)
- On the adaptive elastic net with a diverging number of parameters
- Variable selection via combined penalization for high-dimensional data analysis
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Regularization in Finite Mixture of Regression Models with Diverging Number of Parameters
- Shrinkage Tuning Parameter Selection with a Diverging Number of Parameters
- On ordinary ridge regression in generalized linear models
- Ideal spatial adaptation by wavelet shrinkage
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Regularization and Variable Selection Via the Elastic Net
- Tuning parameter selectors for the smoothly clipped absolute deviation method