Learning rates for the risk of kernel-based quantile regression estimators in additive models
DOI: 10.1142/S0219530515500050 · zbMATH Open: 1338.62077 · arXiv: 1405.3379 · MaRDI QID: Q2805231
Ding-Xuan Zhou, Andreas Christmann
Publication date: 10 May 2016
Published in: Analysis and Applications (Singapore)
Abstract: Additive models play an important role in semiparametric statistics. This paper gives learning rates for regularized kernel-based methods for additive models. Provided the additive-model assumption is valid, these learning rates compare favourably, particularly in high dimensions, to recent results on optimal learning rates for purely nonparametric regularized kernel-based quantile regression using the Gaussian radial basis function kernel. Additionally, a concrete example is presented to show that a Gaussian function depending only on one variable lies in the reproducing kernel Hilbert space generated by an additive Gaussian kernel, but does not belong to the reproducing kernel Hilbert space generated by the multivariate Gaussian kernel of the same variance.
Full work available at URL: https://arxiv.org/abs/1405.3379
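The sketch below is a minimal illustration of the setting the abstract describes, not code from the paper: regularized quantile regression with the pinball loss in the RKHS of an additive Gaussian kernel (a sum of univariate Gaussian kernels, one per coordinate), solved by plain subgradient descent on the representer-theorem coefficients. The solver, the parameter names (gamma, lam, tau), and the synthetic data are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of kernel-based
# quantile regression in an additive-Gaussian-kernel RKHS.
import numpy as np

def additive_gaussian_kernel(X1, X2, gamma=1.0):
    """Additive kernel K(x, x') = sum_j exp(-(x_j - x'_j)^2 / gamma^2):
    one univariate Gaussian kernel per coordinate, summed."""
    diff2 = (X1[:, None, :] - X2[None, :, :]) ** 2   # (n1, n2, d)
    return np.exp(-diff2 / gamma**2).sum(axis=2)

def multivariate_gaussian_kernel(X1, X2, gamma=1.0):
    """Standard RBF kernel exp(-||x - x'||^2 / gamma^2), for contrast
    with the purely nonparametric setting mentioned in the abstract."""
    diff2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-diff2 / gamma**2)

def fit_kernel_quantile(K, y, tau=0.5, lam=1e-3, lr=0.01, n_iter=5000):
    """Subgradient descent on the regularized pinball-loss objective
    (1/n) * sum_i rho_tau(y_i - f(x_i)) + lam * ||f||_H^2,
    with f(x) = sum_i alpha_i K(x_i, x) by the representer theorem."""
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        resid = y - K @ alpha
        # subgradient of rho_tau(r) = r * (tau - 1{r < 0}) w.r.t. f(x_i)
        g_loss = np.where(resid > 0, -tau, 1.0 - tau)
        grad = K @ g_loss / n + 2.0 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 5
    X = rng.uniform(-1, 1, size=(n, d))
    # additive ground truth f(x) = f_1(x_1) + f_2(x_2), plus noise
    y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.2 * rng.standard_normal(n)
    K = additive_gaussian_kernel(X, X, gamma=0.5)
    alpha = fit_kernel_quantile(K, y, tau=0.75)
    print("fitted 0.75-quantile at first 3 points:", (K @ alpha)[:3])
```

Subgradient descent is chosen here only for self-containment; any convex solver for the (non-smooth but convex) pinball objective would do. The additive kernel keeps the effective complexity growing with the number of coordinates rather than with the full input dimension, which is the intuition behind the favourable high-dimensional rates the paper proves.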
Recommendations
- Learning rates for kernel-based expectile regression
- Conditional quantiles with varying Gaussians
- Learning rates of least-square regularized regression
- Consistency of support vector machines using additive kernels for additive models
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
Mathematics Subject Classification
- Nonparametric estimation (62G05)
- Nonparametric regression and quantile regression (62G08)
- Computational learning theory (68Q32)
Cites Work
- Support-vector networks
- Generalized additive models
- Learning Theory
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- Scattered Data Approximation
- Shannon sampling. II: Connections to learning theory
- On consistency and robustness properties of support vector machines for heavy-tailed distributions
- Classification with Gaussians and convex loss
- Online regression with varying Gaussians and non-identical distributions
Cited In (16)
- Robust wavelet-based estimation for varying coefficient dynamic models under long-dependent structures
- On the K-functional in learning theory
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Moduli of smoothness, \(K\)-functionals and Jackson-type inequalities associated with Kernel function approximation in learning theory
- Communication-efficient estimation of high-dimensional quantile regression
- Sparse additive machine with ramp loss
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- HARFE: hard-ridge random feature expansion
- Generalized support vector regression: Duality and tensor-kernel representation
- On the robustness of regularized pairwise learning methods based on kernels
- Theory of deep convolutional neural networks. II: Spherical analysis
- Error analysis for coefficient-based regularized regression in additive models
- Sparse additive support vector machines in bounded variation space
- Asymptotic analysis for affine point processes with large initial intensity
- A new large-scale learning algorithm for generalized additive models
- Optimal learning with Gaussians and correntropy loss