Convergence analysis of online learning algorithm with two-stage step size
From MaRDI portal
MaRDI QID: Q2698633
DOI: 10.1515/ijnsns-2020-0155
OpenAlex ID: W3196814160
Publication date: 24 April 2023
Published in: International Journal of Nonlinear Sciences and Numerical Simulation
Full work available at URL: https://doi.org/10.1515/ijnsns-2020-0155
MSC classifications:
- Computational learning theory (68Q32)
- General nonlinear regression (62J02)
- Learning and adaptive systems in artificial intelligence (68T05)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
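The record gives only the title, but the "two-stage step size" it names is commonly a schedule that keeps the step size constant for an initial phase and then lets it decay polynomially. Below is a minimal sketch of generic online gradient descent with such a schedule for a least-squares loss; all parameter names and values (`eta0`, `switch`, `decay`) are hypothetical illustrations of the general setting, not the algorithm or rates analyzed in this paper.

```python
import numpy as np

def online_sgd_two_stage(data, dim, eta0=0.1, switch=50, decay=0.51):
    """Online gradient descent for least squares with a two-stage step size:
    a constant step eta0 for the first `switch` rounds, then a polynomially
    decaying step eta0 * (t - switch)**(-decay).
    All parameter choices here are hypothetical, for illustration only."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(data, start=1):
        if t <= switch:
            eta = eta0                              # stage 1: constant step
        else:
            eta = eta0 * (t - switch) ** (-decay)   # stage 2: decaying step
        grad = (w @ x - y) * x                      # gradient of (1/2)(w·x - y)^2
        w = w - eta * grad
    return w

# toy usage: recover a linear target from noisy streaming samples
rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0])
samples = [(x, x @ w_star + 0.01 * rng.standard_normal())
           for x in rng.standard_normal((500, 2))]
w_hat = online_sgd_two_stage(samples, dim=2)
```

The constant first stage makes fast initial progress toward the target, while the decaying second stage averages out the sampling noise so the iterates settle down.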
Cites Work
- Online gradient descent learning algorithms
- Fast and strong convergence of online learning algorithms
- Online regularized learning with pairwise loss functions
- Fully online classification by regularization
- Optimal rates for the regularized least-squares algorithm
- Online learning algorithms
- On the mathematical foundations of learning
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Learning Theory
- Online Regularized Classification Algorithms