Statistical consistency of coefficient-based conditional quantile regression
Publication: 290691
DOI: 10.1016/j.jmva.2016.03.006
zbMath: 1357.68164
OpenAlex: W2339576249
MaRDI QID: Q290691
Publication date: 3 June 2016
Published in: Journal of Multivariate Analysis
Full work available at URL: https://doi.org/10.1016/j.jmva.2016.03.006
Mathematics Subject Classification:
- General nonlinear regression (62J02)
- Learning and adaptive systems in artificial intelligence (68T05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Cites Work
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Least square regression with indefinite kernels and coefficient regularization
- Estimating conditional quantiles with the help of the pinball loss
- Unified approach to coefficient-based regularized regression
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- The covering number in learning theory
- Optimal aggregation of classifiers in statistical learning.
- Weak convergence and empirical processes. With applications to statistics
- Learning theory estimates for coefficient-based regularized regression
- Concentration estimates for learning with unbounded sampling
- Conditional quantiles with varying Gaussians
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Convergence analysis of coefficient-based regularization under moment incremental condition
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- Atomic Decomposition by Basis Pursuit
- Neural Network Learning
- Quantile Regression in Reproducing Kernel Hilbert Spaces
- Regularization and Variable Selection Via the Elastic Net
- Learning Theory
- Stable signal recovery from incomplete and inaccurate measurements
- Theory of Reproducing Kernels