Model selection in high-dimensional quantile regression with seamless L₀ penalty
From MaRDI portal
Publication:900968
Abstract: In this paper we are interested in parameter estimation in a linear model whose number of parameters increases with the sample size. Without any assumption on the moments of the model error, we propose and study the seamless-\(L_0\) penalized quantile estimator. We first establish its convergence rate; we then prove that it correctly distinguishes between zero and nonzero parameters and that the estimators of the nonzero parameters are asymptotically normal. A consistent BIC criterion for selecting the tuning parameters is also given.
Recommendations
- Adaptive penalized quantile regression for high dimensional data
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Two-step variable selection in quantile regression models
- Variable selection in high-dimensional quantile varying coefficient models
- scientific article; zbMATH DE number 6162361
Cites work
- scientific article; zbMATH DE number 6162361
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Adaptive penalized quantile regression for high dimensional data
- Adaptive robust variable selection
- Composite quantile regression and the oracle model selection theory
- Limiting distributions for \(L_1\) regression estimators under general conditions
- Model Selection via Bayesian Information Criterion for Quantile Regression Models
- Nonconcave penalized likelihood with a diverging number of parameters
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Strong oracle optimality of folded concave penalized estimation
- Variable selection and estimation in generalized linear models with the seamless \(L_0\) penalty
- Variable selection in quantile regression
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
Cited in (12)
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Regularized simultaneous model selection in multiple quantiles regression
- Variable selection in convex quantile regression: \(\mathcal{L}_1\)-norm or \(\mathcal{L}_0\)-norm regularization?
- Automatic selection by penalized asymmetric \(L_q\)-norm in a high-dimensional model with grouped variables
- Moderate deviations for quantile regression processes
- Variable selection via generalized SELO-penalized Cox regression models
- Quantile universal threshold
- The growth rate of significant regressors for high dimensional data
- Variable selection via generalized SELO-penalized linear regression models
- Adaptive elastic-net selection in a quantile model with diverging number of variable groups
- scientific article; zbMATH DE number 6162361
- Adaptive group Lasso selection in quantile models
This page was built for publication: Model selection in high-dimensional quantile regression with seamless \(L_0\) penalty