Convergence analysis of online algorithms
Publication: 2454719
DOI: 10.1007/s10444-005-9002-z
zbMath: 1129.68070
Wikidata: Q58759011 (Scholia: Q58759011)
MaRDI QID: Q2454719
Publication date: 16 October 2007
Published in: Advances in Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10444-005-9002-z
Keywords: reproducing kernel Hilbert space; general loss function; online learning algorithm; regularized sample error
MSC 68T05: Learning and adaptive systems in artificial intelligence
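The keywords above describe online learning in a reproducing kernel Hilbert space with regularization. As a rough illustration of this class of algorithms (not the paper's specific analysis), the following sketch implements a stochastic-gradient update for regularized least squares in an RKHS; the kernel choice, step size, and regularization constant are assumptions made for the example.

```python
import math

def gaussian_kernel(x, y, sigma=0.5):
    """Gaussian (RBF) kernel inducing the RKHS; sigma is an assumed width."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

class OnlineKernelLearner:
    """Online regularized learning in an RKHS with least-squares loss.

    Stochastic-gradient update on the regularized risk:
        f_{t+1} = (1 - eta * lam) * f_t - eta * (f_t(x_t) - y_t) * K(x_t, .)
    The hypothesis f_t is stored as a kernel expansion over seen points.
    """

    def __init__(self, lam=0.01, eta=0.2, kernel=gaussian_kernel):
        self.lam = lam      # regularization parameter (assumed value)
        self.eta = eta      # constant step size; decaying schedules are also common
        self.kernel = kernel
        self.points = []    # support points x_i
        self.coefs = []     # expansion coefficients c_i

    def predict(self, x):
        return sum(c * self.kernel(xi, x)
                   for xi, c in zip(self.points, self.coefs))

    def step(self, x, y):
        err = self.predict(x) - y
        # shrink all coefficients (gradient of the regularization term) ...
        self.coefs = [(1 - self.eta * self.lam) * c for c in self.coefs]
        # ... then add one kernel term for the loss gradient at (x, y)
        self.points.append(x)
        self.coefs.append(-self.eta * err)

# usage: fit samples of sin on [0, 2*pi] over a few passes
xs = [2 * math.pi * i / 30 for i in range(30)]
model = OnlineKernelLearner()
for _ in range(5):
    for x in xs:
        model.step(x, math.sin(x))
```

Each step costs time proportional to the number of support points seen so far, which is why truncation or budget schemes are often paired with such updates in practice.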
Related Items
- Derivative reproducing properties for kernel methods in learning theory
- Gradient learning in a classification setting by gradient descent
Cites Work
- Unnamed Item
- Multi-kernel regularized classifiers
- The covering number in learning theory
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Regularization networks and support vector machines
- Online learning algorithms
- Shannon sampling. II: Connections to learning theory
- On the mathematical foundations of learning
- Capacity of reproducing kernel spaces in learning theory
- Online Regularized Classification Algorithms
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Shannon sampling and function reconstruction from point values
- DOI: 10.1162/1532443041424319
- Online Learning with Kernels
- Learning Theory
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels