Convergence of unregularized online learning algorithms
zbMATH Open: Zbl 1467.68226 · arXiv: 1708.02939 · MaRDI QID: Q4558495
Authors: Yunwen Lei, Lei Shi, Zheng-Chu Guo
Publication date: 22 November 2018
Full work available at URL: https://arxiv.org/abs/1708.02939
Mathematics Subject Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Online algorithms; streaming algorithms (68W27)
- Strong limit theorems (60F15)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
- Martingales with continuous parameter (60G44)
- (L^p)-limit theorems (60F25)
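The publication concerns unregularized online learning, i.e. stochastic gradient updates with no penalty term, where convergence is driven by decaying step sizes alone. As an illustrative aside (not the paper's algorithm or setting), here is a minimal sketch for least-squares regression with a linear kernel; the step-size schedule eta_t = eta1 * t^(-theta), the function name, and all parameter values are assumptions chosen for the example.

```python
import random

def online_sgd_least_squares(samples, theta=0.5, eta1=0.2):
    """Unregularized online gradient descent for least squares.

    At step t, with sample (x_t, y_t) and step size eta_t = eta1 * t^(-theta),
    update w <- w - eta_t * (<w, x_t> - y_t) * x_t.  No regularization term
    is added; the decaying step sizes control convergence (0 < theta < 1).
    """
    dim = len(samples[0][0])
    w = [0.0] * dim
    for t, (x, y) in enumerate(samples, start=1):
        eta = eta1 * t ** (-theta)
        err = sum(wi * xi for wi, xi in zip(w, x)) - y  # prediction residual
        w = [wi - eta * err * xi for wi, xi in zip(w, x)]
    return w

# Usage: approximately recover w* = (2.0, -1.0) from noisy linear observations.
random.seed(0)
w_star = (2.0, -1.0)
samples = []
for _ in range(20000):
    x = (random.gauss(0, 1), random.gauss(0, 1))
    y = sum(ws * xi for ws, xi in zip(w_star, x)) + random.gauss(0, 0.1)
    samples.append((x, y))
w = online_sgd_least_squares(samples)
```

After one pass over the 20,000 samples, `w` lands close to `w_star`; shrinking `theta` toward 0 keeps the step sizes larger, trading stability for faster initial progress.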
Cites Work
- Pegasos: primal estimated sub-gradient solver for SVM
- Support-vector networks
- Learning Theory
- Online learning algorithms
- Nonparametric stochastic approximation with large step-sizes
- Learning Theory
- Support Vector Machines
- Probability Inequalities for Sums of Bounded Random Variables
- Robust Stochastic Approximation Approach to Stochastic Programming
- On the influence of the kernel on the consistency of support vector machines
- Support vector machine soft margin classifiers: error analysis
- Title not available
- Efficient online and batch learning using forward backward splitting
- On the Generalization Ability of On-Line Learning Algorithms
- Online Learning with Kernels
- Online gradient descent learning algorithms
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Title not available
- Online learning with Markov sampling
- On Complexity Issues of Online Learning Algorithms
- Online Regularized Classification Algorithms
- Fully online classification by regularization
- Unregularized online learning algorithms with general loss functions
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Learning theory of randomized Kaczmarz algorithm
Cited In (11)
- Convergence of online mirror descent
- Fast and strong convergence of online learning algorithms
- Convergence analysis of online algorithms
- Online sufficient dimension reduction through sliced inverse regression
- Convergence analysis for kernel-regularized online regression associated with an RRKHS
- Unregularized online algorithms with varying Gaussians
- Online gradient descent learning algorithms
- Convergence analysis of online learning algorithm with two-stage step size
- Unregularized online learning algorithms with general loss functions
- Sparse online regression algorithm with insensitive loss functions
- Differentially private SGD with random features