Learning rates for the risk of kernel-based quantile regression estimators in additive models
From MaRDI portal
Publication:2805231
Abstract: Additive models play an important role in semiparametric statistics. This paper establishes learning rates for regularized kernel-based methods for additive models. Provided the additive-model assumption is valid, these learning rates compare favourably, particularly in high dimensions, with recent results on optimal learning rates for purely nonparametric regularized kernel-based quantile regression using the Gaussian radial basis function kernel. Additionally, a concrete example shows that a Gaussian function depending on only one variable lies in the reproducing kernel Hilbert space generated by an additive Gaussian kernel, but does not belong to the reproducing kernel Hilbert space generated by the multivariate Gaussian kernel of the same variance.
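The contrast between the two kernels in the abstract can be illustrated numerically. The sketch below, under the common definitions of these kernels, evaluates an additive Gaussian kernel (a sum of univariate Gaussian kernels, one per coordinate) against the standard multivariate Gaussian kernel; the function names and the width parameter `gamma` are illustrative choices, not taken from the paper.

```python
import numpy as np

def gaussian_1d(s, t, gamma=1.0):
    # Univariate Gaussian (RBF) kernel on a single coordinate.
    return np.exp(-gamma * (s - t) ** 2)

def additive_gaussian_kernel(x, y, gamma=1.0):
    # Additive kernel: a sum of univariate Gaussian kernels,
    # one per input coordinate.
    return sum(gaussian_1d(xj, yj, gamma) for xj, yj in zip(x, y))

def multivariate_gaussian_kernel(x, y, gamma=1.0):
    # Standard multivariate Gaussian (RBF) kernel on the full vector.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.exp(-gamma * np.sum((x - y) ** 2))

x, y = [0.0, 0.0], [1.0, 2.0]
print(additive_gaussian_kernel(x, y))      # exp(-1) + exp(-4)
print(multivariate_gaussian_kernel(x, y))  # exp(-5)
```

The additive kernel sums per-coordinate similarities, while the multivariate kernel multiplies them (a sum inside a single exponential), so the two generate genuinely different reproducing kernel Hilbert spaces, which is the point of the paper's concrete example.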
Recommendations
- Learning rates for kernel-based expectile regression
- Conditional quantiles with varying Gaussians
- Learning rates of least-square regularized regression
- Consistency of support vector machines using additive kernels for additive models
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
Cites work
- scientific article; zbMATH DE number 5957364
- scientific article; zbMATH DE number 47282
- scientific article; zbMATH DE number 1332320
- scientific article; zbMATH DE number 1843268
- scientific article; zbMATH DE number 823069
- scientific article; zbMATH DE number 3336465
- scientific article; zbMATH DE number 962825
- Classification with Gaussians and convex loss
- Consistency of the group Lasso and multiple kernel learning
- Generalized additive models
- Learning Theory
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- On consistency and robustness properties of support vector machines for heavy-tailed distributions
- Online regression with varying Gaussians and non-identical distributions
- Scattered Data Approximation
- Shannon sampling. II: Connections to learning theory
- Support-vector networks
Cited in (17)
- Robust wavelet-based estimation for varying coefficient dynamic models under long-dependent structures
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Communication-efficient estimation of high-dimensional quantile regression
- Moduli of smoothness, \(K\)-functionals and Jackson-type inequalities associated with Kernel function approximation in learning theory
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Sparse additive machine with ramp loss
- HARFE: hard-ridge random feature expansion
- On the robustness of regularized pairwise learning methods based on kernels
- On the \(K\)-functional in learning theory
- Theory of deep convolutional neural networks. II: Spherical analysis
- Generalized support vector regression: duality and tensor-kernel representation
- Error analysis for coefficient-based regularized regression in additive models
- Asymptotic analysis for affine point processes with large initial intensity
- Sparse additive support vector machines in bounded variation space
- Learning rates for kernel-based expectile regression
- A new large-scale learning algorithm for generalized additive models
- Optimal learning with Gaussians and correntropy loss