Statistical consistency of coefficient-based conditional quantile regression
DOI: 10.1016/j.jmva.2016.03.006 · zbMATH Open: 1357.68164 · OpenAlex: W2339576249 · MaRDI QID: Q290691
Publication date: 3 June 2016
Published in: Journal of Multivariate Analysis
Full work available at URL: https://doi.org/10.1016/j.jmva.2016.03.006
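The publication concerns conditional quantile regression, which is built on the pinball (check) loss cited below ("Estimating conditional quantiles with the help of the pinball loss"). As a minimal illustrative sketch (not taken from the paper itself; the function names are ours), the tau-quantile of a sample is the constant predictor minimizing the average pinball loss:

```python
def pinball_loss(u, tau):
    """Pinball (check) loss: rho_tau(u) = tau*u if u >= 0, else (tau - 1)*u."""
    return tau * u if u >= 0 else (tau - 1.0) * u

def empirical_quantile(ys, tau):
    """The empirical tau-quantile minimizes the average pinball loss over
    constant predictors c; searching over the sample points suffices."""
    return min(sorted(ys), key=lambda c: sum(pinball_loss(y - c, tau) for y in ys))
```

For example, `empirical_quantile([1, 2, 3, 4, 5], 0.5)` recovers the median 3; the coefficient-based schemes studied in the paper minimize this same loss over kernel expansions with a regularization penalty on the coefficients.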
Recommendations
- Asymptotic analysis of quantile regression learning based on coefficient dependent regularization
- Error analysis for coefficient-based regularized regression in additive models
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Unified approach to coefficient-based regularized regression
- Convergence analysis of coefficient-based regularization under moment incremental condition
Mathematics Subject Classification
- General nonlinear regression (62J02)
- Learning and adaptive systems in artificial intelligence (68T05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Title not available
- Title not available
- Quantile regression
- Atomic Decomposition by Basis Pursuit
- Regularization and Variable Selection Via the Elastic Net
- Theory of Reproducing Kernels
- Learning Theory
- Stable signal recovery from incomplete and inaccurate measurements
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Title not available
- Title not available
- Quantile Regression in Reproducing Kernel Hilbert Spaces
- Optimal aggregation of classifiers in statistical learning
- Neural Network Learning
- The covering number in learning theory
- Learning theory estimates for coefficient-based regularized regression
- Concentration estimates for learning with unbounded sampling
- Conditional quantiles with varying Gaussians
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Convergence analysis of coefficient-based regularization under moment incremental condition
- Capacity of reproducing kernel spaces in learning theory
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Learning Theory
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Least square regression with indefinite kernels and coefficient regularization
- Estimating conditional quantiles with the help of the pinball loss
- Unified approach to coefficient-based regularized regression
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
Cited In (2)