Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Publication: 5076671
DOI: 10.1137/21M1394308
zbMATH: 1492.90135
arXiv: 2002.05359
OpenAlex: W3005728424
MaRDI QID: Q5076671
Michael I. Jordan, Samuel Horváth, Peter Richtárik, Lihua Lei
Publication date: 17 May 2022
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/2002.05359
Related Items
- Stochastic momentum methods for non-convex learning without bounded assumptions
- Byzantine-robust loopless stochastic variance-reduced gradient
- Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
Cites Work
- Universal gradient methods for convex optimization problems
- New method of stochastic approximation type
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Fastest rates for stochastic mirror descent methods
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Distributed optimization with arbitrary local solvers
- Semi-stochastic coordinate descent
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- On the Adaptivity of Stochastic Gradient-Based Optimization
- Accelerate stochastic subgradient method by leveraging local growth condition
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Some methods of speeding up the convergence of iteration methods