Accelerating variance-reduced stochastic gradient methods
DOI: 10.1007/S10107-020-01566-2
zbMATH Open: 1489.90113
arXiv: 1910.09494
OpenAlex: W3084718985
MaRDI QID: Q2118092
FDO: Q2118092
Authors: Derek Driggs, Matthias J. Ehrhardt, Carola-Bibiane Schönlieb
Publication date: 22 March 2022
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1910.09494
Recommendations
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Katyusha: the first direct acceleration of stochastic gradient methods
- Asymptotic estimates for \(r\)-Whitney numbers of the second kind
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
MSC classifications:
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Analysis of algorithms and problem complexity (68Q25)
- Stochastic programming (90C15)
- Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Title not available
- Title not available
- Robust principal component analysis?
- Linear coupling: an ultimate unification of gradient and mirror descent
- A Stochastic Approximation Method
- Exact matrix completion via convex optimization
- Signal Recovery by Proximal Forward-Backward Splitting
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- A proximal stochastic gradient method with progressive variance reduction
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Minimizing finite sums with the stochastic average gradient
- Optimization methods for large-scale machine learning
- Catalyst acceleration for first-order convex optimization: from theory to practice
- An optimal randomized incremental gradient method
- Katyusha: the first direct acceleration of stochastic gradient methods
- SpiderBoost
Cited In (13)
- Accelerated doubly stochastic gradient descent for tensor CP decomposition
- A mini-batch stochastic conjugate gradient algorithm with variance reduction
- An aggressive reduction on the complexity of optimization for non-strongly convex objectives
- Accelerated stochastic variance reduction for a class of convex optimization problems
- An improvement of stochastic gradient descent approach for mean-variance portfolio optimization problem
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Accelerated gradient methods with absolute and relative noise in the gradient
- Nonconvex optimization with inertial proximal stochastic variance reduction gradient
- Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methods
- Batching Adaptive Variance Reduction
- Katyusha: the first direct acceleration of stochastic gradient methods
- Variance reduction for root-finding problems
- Analysis and improvement for a class of variance reduced methods