On the Adaptivity of Stochastic Gradient-Based Optimization
Publication: 5114394
DOI: 10.1137/19M1256919
zbMath: 1445.90066
arXiv: 1904.04480
OpenAlex: W2937268935
MaRDI QID: Q5114394
Publication date: 22 June 2020
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1904.04480
MSC classifications: Convex programming (90C25), Large-scale problems in mathematical programming (90C06), Stochastic programming (90C15)
Related Items (3)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
- Variance reduction on general adaptive stochastic mirror descent
- Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise
Cites Work
- An optimal method for stochastic composite optimization
- Universal gradient methods for convex optimization problems
- On the uniform convexity of \(L^p\) and \(l^p\)
- New method of stochastic approximation type
- Martingales with values in uniformly convex spaces
- Introductory lectures on convex optimization. A basic course.
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- An optimal algorithm for stochastic strongly-convex optimization
- Nemirovski's Inequalities Revisited
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Katyusha: the first direct acceleration of stochastic gradient methods
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Accelerate stochastic subgradient method by leveraging local growth condition
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization