Convergence of unregularized online learning algorithms
Publication:4558495
Mathematics Subject Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Online algorithms; streaming algorithms (68W27)
- Strong limit theorems (60F15)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
- Martingales with continuous parameter (60G44)
- (L^p)-limit theorems (60F25)
Cites work
- Scientific article; zbMATH DE number 515978 (no title available)
- Scientific article; zbMATH DE number 1569102 (no title available)
- Efficient online and batch learning using forward backward splitting
- Fully online classification by regularization
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Learning Theory
- Learning Theory
- Learning theory of randomized Kaczmarz algorithm
- Nonparametric stochastic approximation with large step-sizes
- Online learning with Markov sampling
- On Complexity Issues of Online Learning Algorithms
- On the Generalization Ability of On-Line Learning Algorithms
- On the influence of the kernel on the consistency of support vector machines
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Online Learning with Kernels
- Online Regularized Classification Algorithms
- Online gradient descent learning algorithms
- Online learning algorithms
- Pegasos: primal estimated sub-gradient solver for SVM
- Probability Inequalities for Sums of Bounded Random Variables
- Robust Stochastic Approximation Approach to Stochastic Programming
- Support Vector Machines
- Support vector machine soft margin classifiers: error analysis
- Support-vector networks
- Unregularized online learning algorithms with general loss functions
Cited in (11 documents)
- Convergence of online mirror descent
- Fast and strong convergence of online learning algorithms
- Convergence analysis of online algorithms
- Online sufficient dimension reduction through sliced inverse regression
- Unregularized online algorithms with varying Gaussians
- Online gradient descent learning algorithms
- Convergence analysis for kernel-regularized online regression associated with an RRKHS
- Convergence analysis of online learning algorithm with two-stage step size
- Unregularized online learning algorithms with general loss functions
- Sparse online regression algorithm with insensitive loss functions
- Differentially private SGD with random features