Model selection in high-dimensional quantile regression with seamless \(L_0\) penalty
Publication: Q900968
DOI: 10.1016/j.spl.2015.09.011
zbMath: 1328.62147
arXiv: 1506.01648
OpenAlex: W2963709093
MaRDI QID: Q900968
Publication date: 23 December 2015
Published in: Statistics \& Probability Letters
Full work available at URL: https://arxiv.org/abs/1506.01648
Related Items (5)
- Adaptive group Lasso selection in quantile models
- Variable selection via generalized SELO-penalized linear regression models
- Variable selection via generalized SELO-penalized Cox regression models
- Adaptive elastic-net selection in a quantile model with diverging number of variable groups
- Moderate deviations for quantile regression processes
Cites Work
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Composite quantile regression and the oracle model selection theory
- Limiting distributions for \(L_1\) regression estimators under general conditions
- Nonconcave penalized likelihood with a diverging number of parameters.
- Adaptive penalized quantile regression for high dimensional data
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Adaptive robust variable selection
- Strong oracle optimality of folded concave penalized estimation
- Variable selection and estimation in generalized linear models with the seamless \(L_0\) penalty
- Model Selection via Bayesian Information Criterion for Quantile Regression Models
- A general theory of concave regularization for high-dimensional sparse estimation problems