Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
Publication:2039235
DOI: 10.1007/s10107-020-01506-0
zbMath: 1471.65051
arXiv: 1805.02632
OpenAlex: W3025544109
MaRDI QID: Q2039235
Robert M. Gower, Francis Bach, Peter Richtárik
Publication date: 2 July 2021
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1805.02632
MSC classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Stochastic programming (90C15)
Related Items
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
- Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methods
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Unified analysis of stochastic gradient methods for composite convex and smooth optimization
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
Uses Software
Cites Work
- Parallel coordinate descent methods for big data optimization
- Minimizing finite sums with the stochastic average gradient
- Database-friendly random projections: Johnson-Lindenstrauss with binary coins
- Introductory lectures on convex optimization. A basic course
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Control variates for quasi-Monte Carlo (with comments and rejoinder)
- Eigenvalues of circulant matrices
- Coordinate descent with arbitrary sampling II: expected separable overapproximation
- Modification Methods for Inverting Matrices and Solving Systems of Linear Algebraic Equations
- Randomized Iterative Methods for Linear Systems
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms
- Katyusha: the first direct acceleration of stochastic gradient methods
- Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
- A stochastic quasi-gradient method of solving optimization problems in Hilbert space
- The Fast Johnson–Lindenstrauss Transform and Approximate Nearest Neighbors
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A Family of Variable-Metric Methods Derived by Variational Means
- A Stochastic Approximation Method