Scaling up stochastic gradient descent for non-convex optimisation
From MaRDI portal
Publication:6097095
DOI: 10.1007/s10994-022-06243-3
arXiv: 2210.02882
OpenAlex: W4303446428
MaRDI QID: Q6097095
Saad Mohamad, Hamad Alamri, Abdelhamid Bouchachia
Publication date: 12 June 2023
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/2210.02882
Keywords: stochastic gradient descent; variational inference; deep reinforcement learning; distributed and parallel computation; large scale non-convex optimisation
Cites Work
- Simple statistical gradient-following algorithms for connectionist reinforcement learning
- An introduction to variational methods for graphical models
- The vectorization of ITPACK 2C
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Robust Stochastic Approximation Approach to Stochastic Programming
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Optimization Methods for Large-Scale Machine Learning
- Latent Dirichlet allocation (DOI: 10.1162/jmlr.2003.3.4-5.993)
- Optimal Distributed Online Prediction using Mini-Batches
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Stochastic Approximation Method