Fast and strong convergence of online learning algorithms
DOI: 10.1007/s10444-019-09707-8
zbMath: 1433.68344
arXiv: 1710.03600
OpenAlex: W2964017994
Wikidata: Q127743100 (Scholia: Q127743100)
MaRDI QID: Q2305549
Publication date: 11 March 2020
Published in: Advances in Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1710.03600
MSC classifications:
- Learning and adaptive systems in artificial intelligence (68T05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
- Online algorithms; streaming algorithms (68W27)
Related Items (6)
- Online gradient descent algorithms for functional data learning
- Distributed spectral pairwise ranking algorithms
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Capacity dependent analysis for functional online learning algorithms
- Online regularized learning algorithm for functional data
- Convergence analysis of online learning algorithm with two-stage step size
Cites Work
- Nonparametric stochastic approximation with large step-sizes
- Unregularized online learning algorithms with general loss functions
- Pegasos: primal estimated sub-gradient solver for SVM
- Regularization in kernel learning
- Combined $\ell_2$ data and gradient fitting in conjunction with $\ell_1$ regularization
- Online gradient descent learning algorithms
- The covering number in learning theory
- Online regularized learning with pairwise loss functions
- Optimal rates for the regularized least-squares algorithm
- Convergence analysis of online algorithms
- Online learning algorithms
- Boosting with early stopping: convergence and consistency
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Learning Theory
- Support Vector Machines
- Online Regularized Classification Algorithms
- A new concentration result for regularized risk minimizers
- Robust Stochastic Approximation Approach to Stochastic Programming
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- On Complexity Issues of Online Learning Algorithms
- Online Learning with Kernels
- Online Pairwise Learning Algorithms
- Piecewise-polynomial approximations of functions of the classes $W_p^{\alpha}$
- Theory of Reproducing Kernels
- Scattered Data Approximation
- Smoothing spline ANOVA models