Model selection in high-dimensional quantile regression with seamless L₀ penalty
Publication: Q900968
DOI: 10.1016/J.SPL.2015.09.011
zbMATH Open: 1328.62147
arXiv: 1506.01648
OpenAlex: W2963709093
MaRDI QID: Q900968
Authors: Gabriela Ciuperca
Publication date: 23 December 2015
Published in: Statistics \& Probability Letters
Abstract: In this paper we are interested in parameter estimation for a linear model in which the number of parameters increases with the sample size. Without any assumption on the moments of the model error, we propose and study the seamless \(L_0\) quantile estimator. For this estimator we first give the convergence rate. We then prove that it correctly distinguishes between the zero and nonzero parameters and that the estimators of the nonzero parameters are asymptotically normal. A consistent BIC criterion for selecting the tuning parameters is given.
Full work available at URL: https://arxiv.org/abs/1506.01648
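The abstract describes a quantile-loss objective with a seamless \(L_0\) (SELO) penalty and BIC-based tuning-parameter selection. The sketch below is a minimal, hypothetical illustration of that setup in Python, not the author's implementation: the SELO penalty form follows Dicker, Huang and Lin (2013), while the derivative-free optimizer, the thresholding constant, the particular BIC variant, and all function and parameter names (`fit_selo_quantile`, `lam`, `gamma`) are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import minimize


def check_loss(u, tau):
    """Quantile (check) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))


def selo_penalty(beta, lam, gamma):
    """Seamless-L0 penalty (Dicker, Huang and Lin, 2013):
    p(b) = lam / log(2) * log(|b| / (|b| + gamma) + 1),
    which approaches the L0 penalty lam * 1{b != 0} as gamma -> 0."""
    b = np.abs(beta)
    return lam / np.log(2.0) * np.log(b / (b + gamma) + 1.0)


def fit_selo_quantile(X, y, tau=0.5, lam=0.1, gamma=0.01, beta0=None):
    """Minimise sum_i rho_tau(y_i - x_i' beta) + n * sum_j p(|beta_j|).
    A derivative-free Powell search is used here only because the
    objective is non-smooth and non-convex; it is an illustrative
    solver, not the optimisation scheme of the paper."""
    n, p = X.shape
    if beta0 is None:
        beta0 = np.zeros(p)

    def objective(beta):
        resid = y - X @ beta
        return check_loss(resid, tau).sum() + n * selo_penalty(beta, lam, gamma).sum()

    beta_hat = minimize(objective, beta0, method="Powell").x
    beta_hat[np.abs(beta_hat) < 1e-4] = 0.0  # hard-threshold tiny coefficients to exact zero
    return beta_hat


def bic_select(X, y, tau, lam_grid, gamma=0.01):
    """Choose lam by a quantile-type BIC. This particular form
    (log of the check-loss sum plus a log(n)/(2n) model-size term)
    is a common choice in the quantile-regression literature and is
    not necessarily the criterion derived in the paper."""
    n = len(y)
    best = (np.inf, None, None)
    for lam in lam_grid:
        beta = fit_selo_quantile(X, y, tau=tau, lam=lam, gamma=gamma)
        fit = check_loss(y - X @ beta, tau).sum()
        df = np.count_nonzero(beta)
        bic = np.log(fit) + df * np.log(n) / (2.0 * n)
        if bic < best[0]:
            best = (bic, lam, beta)
    return best[1], best[2]
```

Under these assumptions one would call, for example, `lam, beta = bic_select(X, y, tau=0.5, lam_grid=np.logspace(-3, 0, 10))` on standardised design columns.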
Recommendations
- Adaptive penalized quantile regression for high dimensional data
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Two-step variable selection in quantile regression models
- Variable selection in high-dimensional quantile varying coefficient models
Cites Work
- Limiting distributions for \(L_1\) regression estimators under general conditions
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Adaptive robust variable selection
- Nonconcave penalized likelihood with a diverging number of parameters.
- Title not available
- Composite quantile regression and the oracle model selection theory
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Model Selection via Bayesian Information Criterion for Quantile Regression Models
- Adaptive penalized quantile regression for high dimensional data
- Strong oracle optimality of folded concave penalized estimation
- Variable selection in quantile regression
- Variable selection and estimation in generalized linear models with the seamless \(L_0\) penalty
Cited In (11)
- Variable selection via generalized SELO-penalized linear regression models
- Variable selection via generalized SELO-penalized Cox regression models
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Moderate deviations for quantile regression processes
- The growth rate of significant regressors for high dimensional data
- Adaptive elastic-net selection in a quantile model with diverging number of variable groups
- Regularized simultaneous model selection in multiple quantiles regression
- Adaptive group Lasso selection in quantile models
- Quantile universal threshold
- Automatic selection by penalized asymmetric \(L_q\)-norm in a high-dimensional model with grouped variables
- Title not available