Nearly unbiased variable selection under minimax concave penalty
Publication: 117379
DOI: 10.1214/09-AOS729 · zbMath: 1183.62120 · arXiv: 1002.4734 · OpenAlex: W1965125844 · MaRDI QID: Q117379
Publication date: 1 April 2010
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1002.4734
Keywords: minimax; mean squared error; model selection; least squares; nonconvex minimization; unbiasedness; variable selection; selection consistency; degrees of freedom; risk estimation; penalized estimation; sign consistency; correct selection
MSC classifications:
- Factor analysis and principal components; correspondence analysis (62H25)
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Linear regression; mixed models (62J05)
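This record carries only metadata, so as a reader aid here is a minimal NumPy sketch of the minimax concave penalty (MCP) named in the title, in its standard parameterisation \(\rho(t; \lambda, \gamma)\), together with the univariate firm-thresholding rule it induces in the orthonormal-design case. The function names are illustrative, not from the paper's own software.

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP: rho(t; lam, gamma) = lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    and the constant gamma*lam^2/2 beyond that point. Because the penalty is
    flat for large |t|, large coefficients are left (nearly) unbiased."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a**2 / (2.0 * gamma),
                    gamma * lam**2 / 2.0)

def mcp_threshold(z, lam, gamma):
    """Firm thresholding: the minimiser of (z - theta)^2 / 2 + mcp_penalty(theta)
    for a scalar z (requires gamma > 1). Small inputs are set to zero, moderate
    inputs are shrunk, and inputs beyond gamma*lam pass through unchanged."""
    a = np.abs(z)
    shrunk = np.sign(z) * gamma * np.maximum(a - lam, 0.0) / (gamma - 1.0)
    return np.where(a <= gamma * lam, shrunk, z)
```

For example, with `lam = 1` and `gamma = 3`, an input of 0.5 is thresholded to 0, an input of 2.0 is shrunk to 1.5, and an input of 4.0 is returned unchanged, illustrating the unbiasedness of the selection rule for large signals.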
Related Items
Uses Software
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Relaxed Lasso
- Estimation of the mean of a multivariate normal distribution
- Estimating the dimension of a model
- Minimax risk over \(l_ p\)-balls for \(l_ q\)-error
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- On the degrees of freedom in shape-restricted regression.
- Nonconcave penalized likelihood with a diverging number of parameters.
- Least angle regression. (With discussion)
- The risk inflation criterion for multiple regression
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Variable selection using MM algorithms
- Piecewise linear regularized solution paths
- Decoding by Linear Programming
- Just relax: convex programming methods for identifying sparse signals in noise
- How Biased is the Apparent Error Rate of a Prediction Rule?
- Ideal spatial adaptation by wavelet shrinkage
- Regularization of Wavelet Approximations
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A new approach to variable selection in least squares problems
- On the Non-Negative Garrotte Estimator
- Some Comments on \(C_p\)
- Discussion: One-step sparse estimates in nonconcave penalized likelihood models