Learning rates of regression with \(q\)-norm loss and threshold
From MaRDI portal
Publication:2835987
Abstract: This paper studies robust regression problems associated with the \(q\)-norm loss and the \(\epsilon\)-insensitive \(q\)-norm loss in a reproducing kernel Hilbert space. We establish a variance-expectation bound under an a priori noise condition on the conditional distribution, which is the key technique for measuring the error bound. Explicit learning rates are given under approximation-ability assumptions on the reproducing kernel Hilbert space.
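To make the losses in the abstract concrete, the following is a minimal sketch (not the paper's algorithm) of kernel-regularized empirical risk minimization with a \(q\)-norm loss and its \(\epsilon\)-insensitive variant. The Gaussian kernel, the choice \(q = 1.5\), the threshold \(\epsilon = 0.1\), and the regularization parameter are illustrative assumptions, as is the use of a generic numerical optimizer in place of any method analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def q_norm_loss(r, q=1.5):
    """q-norm loss |r|^q applied to residuals r."""
    return np.abs(r) ** q

def eps_insensitive_q_loss(r, q=1.5, eps=0.1):
    """epsilon-insensitive q-norm loss: residuals within the
    threshold eps incur zero loss; beyond it, (|r| - eps)^q."""
    return np.maximum(np.abs(r) - eps, 0.0) ** q

def gaussian_kernel(x, y, sigma=0.5):
    """Gaussian (RBF) kernel matrix for 1-D inputs."""
    d = x[:, None] - y[None, :]
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 40)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(40)
K = gaussian_kernel(x, x)

lam = 1e-2  # regularization parameter (assumed value)

def regularized_risk(c):
    """Empirical risk with the eps-insensitive q-norm loss plus
    the RKHS-norm penalty lam * c^T K c for f = sum_i c_i k(x_i, .)."""
    f = K @ c
    return eps_insensitive_q_loss(y - f).mean() + lam * c @ K @ c

c0 = np.zeros(len(x))
res = minimize(regularized_risk, c0, method="L-BFGS-B")
f_hat = K @ res.x  # fitted values at the sample points
```

The representer-theorem form \(f = \sum_i c_i k(x_i, \cdot)\) reduces the RKHS problem to a finite-dimensional one; the optimizer then trades the insensitive \(q\)-norm data fit against the RKHS-norm penalty.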
Recommendations
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Learning rates of least-square regularized regression
- Optimal rate of the regularized regression learning algorithm
- Learning with varying insensitive loss
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Cites work
- Scientific article (zbMATH DE number 1332320; no title available)
- 10.1162/153244302760200713
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Classification with Gaussians and convex loss
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Indefinite kernel network with dependent sampling
- Learning with varying insensitive loss
- Online learning for quantile regression and support vector regression
- Regularization schemes for minimum error entropy principle
- Support vector machine soft margin classifiers: error analysis
Cited in (3)