Stochastic variance reduced gradient methods using a trust-region-like scheme
Publication: 1995995
DOI: 10.1007/s10915-020-01402-x · zbMath: 1461.90071 · OpenAlex: W3131341170 · MaRDI QID: Q1995995
Jie Sun, Yu-Hong Dai, Tengteng Yu, Xin-Wei Liu
Publication date: 2 March 2021
Published in: Journal of Scientific Computing
Full work available at URL: https://doi.org/10.1007/s10915-020-01402-x
Keywords: trust region; empirical risk minimization; Barzilai-Borwein stepsizes; mini-batches; stochastic variance reduced gradient
MSC classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Applications of mathematical programming (90C90); Nonlinear programming (90C30)
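The record itself carries no method details. As a rough illustration of the keyword combination "stochastic variance reduced gradient" with "Barzilai-Borwein stepsizes", here is a minimal NumPy sketch of a generic SVRG loop whose stepsize is set by the classical BB rule between snapshots. The names (`svrg_bb`, `grad_i`) and parameter defaults are hypothetical, and the sketch deliberately omits the paper's trust-region-like safeguard and mini-batching.

```python
import numpy as np

def svrg_bb(grad_i, w0, n, m, max_epochs, alpha0=0.01, rng=None):
    """Generic SVRG with a Barzilai-Borwein stepsize (illustrative sketch).

    grad_i(w, i) -- gradient of the i-th component function at w
    n            -- number of component functions
    m            -- inner-loop length (stochastic updates per epoch)
    """
    rng = np.random.default_rng() if rng is None else rng
    w_tilde = w0.copy()          # snapshot point
    alpha = alpha0               # stepsize for the first epoch
    w_prev, g_prev = None, None
    for _ in range(max_epochs):
        # Full gradient at the snapshot point.
        mu = np.mean([grad_i(w_tilde, i) for i in range(n)], axis=0)
        # BB stepsize from successive snapshots:
        #   alpha = (1/m) * ||s||^2 / |s^T y|,
        #   s = w_tilde - w_prev, y = mu - g_prev.
        if w_prev is not None:
            s, y = w_tilde - w_prev, mu - g_prev
            denom = s @ y
            if abs(denom) > 1e-12:
                alpha = (s @ s) / (m * abs(denom))
        w_prev, g_prev = w_tilde.copy(), mu.copy()
        w = w_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient.
            v = grad_i(w, i) - grad_i(w_tilde, i) + mu
            w -= alpha * v
        w_tilde = w              # last iterate becomes the new snapshot
    return w_tilde
```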
Related Items (3)
- Nonconvex optimization with inertial proximal stochastic variance reduction gradient
- A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
Cites Work
- A Barzilai-Borwein type method for stochastic linear complementarity problems
- Quadratic regularization projected Barzilai-Borwein method for nonnegative matrix factorization
- Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming
- Recent advances in trust region algorithms
- Smoothing projected Barzilai-Borwein method for constrained non-Lipschitz optimization
- A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization, and Continuation
- Large-Scale Machine Learning with Stochastic Gradient Descent
- A Positive Barzilai-Borwein-Like Stepsize and an Extension for Symmetric Linear Systems
- Two-Point Step Size Gradient Methods
- On Projected Stochastic Gradient Descent Algorithm with Weighted Averaging for Least Squares Regression
- Optimization Methods for Large-Scale Machine Learning
- Katyusha: the first direct acceleration of stochastic gradient methods
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Stochastic Gradient Descent on Riemannian Manifolds
- Understanding Machine Learning
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- A Stochastic Approximation Method