Accelerating variance-reduced stochastic gradient methods
DOI: 10.1007/s10107-020-01566-2 · zbMATH Open: 1489.90113 · arXiv: 1910.09494 · OpenAlex: W3084718985 · MaRDI QID: Q2118092
Matthias J. Ehrhardt, Derek Driggs, Carola-Bibiane Schönlieb
Publication date: 22 March 2022
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1910.09494
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Analysis of algorithms and problem complexity (68Q25)
- Stochastic programming (90C15)
- Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Robust principal component analysis?
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
- A Stochastic Approximation Method
- Exact matrix completion via convex optimization
- Signal Recovery by Proximal Forward-Backward Splitting
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Minimizing finite sums with the stochastic average gradient
- Optimization Methods for Large-Scale Machine Learning
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- An optimal randomized incremental gradient method
- SpiderBoost
Cited In (7)
- An aggressive reduction on the complexity of optimization for non-strongly convex objectives
- Accelerated stochastic variance reduction for a class of convex optimization problems
- An improvement of stochastic gradient descent approach for mean-variance portfolio optimization problem
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Accelerated gradient methods with absolute and relative noise in the gradient
- Nonconvex optimization with inertial proximal stochastic variance reduction gradient
- Batching Adaptive Variance Reduction