Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
Publication: 547325
DOI: 10.1007/s11425-010-4043-2
zbMath: 1215.68203
MaRDI QID: Q547325
Publication date: 1 July 2011
Published in: Science China. Mathematics
Full work available at URL: https://doi.org/10.1007/s11425-010-4043-2
Keywords: Sobolev space; reproducing kernel Hilbert space; binary classification; general convex loss; Tsybakov noise condition
MSC: 68T05 (Learning and adaptive systems in artificial intelligence)
Related Items
- Comparison theorems on large-margin learning
- Learning Rates for Classification with Gaussian Kernels
- Learning rates of kernel-based robust classification
- Quantitative convergence analysis of kernel based large-margin unified machines
- The convergence rates of Shannon sampling learning algorithms
Cites Work
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Optimal aggregation of classifiers in statistical learning
- Learning Theory
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Convexity, Classification, and Risk Bounds