A Convergence Study of SGD-Type Methods for Stochastic Optimization
Publication: 6151338
DOI: 10.4208/nmtma.oa-2022-0179
arXiv: 2211.06197
MaRDI QID: Q6151338
Publication date: 11 March 2024
Published in: Numerical Mathematics: Theory, Methods and Applications
Full work available at URL: https://arxiv.org/abs/2211.06197
Computational methods in Markov chains (60J22)
Central limit and other weak theorems (60F05)
Dynamical systems in optimization and economics (37N40)
Cites Work
- Finite-sum smooth optimization with SARAH
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
- Acceleration of Stochastic Approximation by Averaging
- Gradient Convergence in Gradient Methods with Errors
- Optimization Methods for Large-Scale Machine Learning
- Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem
- New Convergence Aspects of Stochastic Gradient Algorithms
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Some methods of speeding up the convergence of iteration methods