Kernel-based online gradient descent using distributed approach
From MaRDI portal
Publication: 2668552
DOI: 10.3934/MFC.2019001
zbMATH Open: 1486.68144
OpenAlex: W2923373465
Wikidata: Q128171452
Scholia: Q128171452
MaRDI QID: Q2668552
FDO: Q2668552
Authors: Xiaming Chen
Publication date: 7 March 2022
Published in: Mathematical Foundations of Computing
Full work available at URL: https://doi.org/10.3934/mfc.2019001
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
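For context, the publication's titular method — online gradient descent in a reproducing-kernel Hilbert space — can be sketched as follows. This is a minimal generic illustration of kernel-based online least-squares regression, not the paper's distributed algorithm; the kernel choice, step-size schedule, and class name are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2))

class KernelOGD:
    """Online gradient descent for least-squares regression in an RKHS.

    The predictor has the form f_t(x) = sum_i alpha_i K(c_i, x); each update
    appends the new sample as a centre with coefficient -eta_t * (f_t(x_t) - y_t),
    i.e. a gradient step on the instantaneous loss 0.5 * (f(x_t) - y_t)^2.
    """

    def __init__(self, kernel=gaussian_kernel, eta0=0.5):
        self.kernel = kernel
        self.eta0 = eta0       # base step size (illustrative choice)
        self.centers = []      # observed inputs acting as kernel centres
        self.alphas = []       # their expansion coefficients
        self.t = 0

    def predict(self, x):
        return sum(a * self.kernel(c, x)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, y):
        self.t += 1
        eta = self.eta0 / np.sqrt(self.t)   # decaying step size eta_t = eta0 / sqrt(t)
        err = self.predict(x) - y           # functional gradient evaluated at x
        self.centers.append(x)
        self.alphas.append(-eta * err)
```

A distributed variant in the spirit of the title would run such a learner on each local machine's share of the data stream and combine (e.g. average) the local predictors; the exact scheme and its error analysis are the subject of the paper.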
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- Online learning algorithms
- A Stochastic Approximation Method
- On early stopping in gradient descent learning
- On the mathematical foundations of learning
- Stochastic Estimation of the Maximum of a Regression Function
- Online regression with varying Gaussians and non-identical distributions
- Online learning with Markov sampling
- Online Regularized Classification Algorithms
- Distributed learning with regularized least squares
- Online Pairwise Learning Algorithms
- Online minimum error entropy algorithm with unbounded sampling
Cited in 5 documents