A stochastic variance reduced gradient method with adaptive step for stochastic optimization
From MaRDI portal
Publication: Q6565722
DOI: 10.1002/OCA.3109
zbMATH Open: 1546.90091
MaRDI QID: Q6565722
FDO: Q6565722
Authors: Jing Li, Dan Xue, Lei Liu, Rulei Qi
Publication date: 2 July 2024
Published in: Optimal Control Applications & Methods
Recommendations
- Analysis and improvement for a class of variance reduced methods
- A stochastic variance reduced gradient using Barzilai-Borwein techniques as second order information
- Improved SVRG for finite sum structure optimization with application to binary classification
- Stochastic gradient method with Barzilai-Borwein step for unconstrained nonlinear optimization
- Stochastic variance reduced gradient methods using a trust-region-like scheme
Keywords
- machine learning
- stochastic programming
- stochastic variance reduced gradient
- Barzilai-Borwein step sizes
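The keywords describe a stochastic variance reduced gradient (SVRG) method whose step size is chosen adaptively via a Barzilai-Borwein (BB) rule. As a rough illustration only (a generic SVRG-BB sketch in the spirit of the cited literature, not the authors' specific adaptive scheme from this paper), the combination can look like this:

```python
import numpy as np

def svrg_bb(grad_i, w0, n, m=100, eta0=0.02, outer=30, rng=None):
    """Generic SVRG with a Barzilai-Borwein step size (illustrative sketch).

    grad_i(w, i): gradient of the i-th component function at w.
    n:            number of component functions in the finite sum.
    m:            inner-loop length; eta0 is used until two snapshots exist.
    BB rule:      eta_k = ||s||^2 / (m * s^T y), with
                  s = w_k - w_{k-1} and y = g_k - g_{k-1} (full gradients).
    """
    rng = np.random.default_rng(rng)
    w_tilde = np.asarray(w0, dtype=float).copy()
    prev_w, prev_g, eta = None, None, eta0
    for _ in range(outer):
        # Full gradient at the current snapshot.
        g = np.mean([grad_i(w_tilde, i) for i in range(n)], axis=0)
        if prev_w is not None:
            s, y = w_tilde - prev_w, g - prev_g
            denom = m * (s @ y)
            if abs(denom) > 1e-12:      # guard against a degenerate BB step
                eta = (s @ s) / denom
        prev_w, prev_g = w_tilde.copy(), g.copy()
        w = w_tilde.copy()
        for _ in range(m):              # inner variance-reduced loop
            i = rng.integers(n)
            v = grad_i(w, i) - grad_i(w_tilde, i) + g
            w -= eta * v
        w_tilde = w
    return w_tilde
```

The BB step approximates the inverse curvature along the most recent snapshot displacement, so no step-size schedule has to be tuned by hand; the names `svrg_bb` and `grad_i` are placeholders for this sketch.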
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- A Stochastic Approximation Method
- Two-Point Step Size Gradient Methods
- Understanding machine learning. From theory to algorithms
- Sample size selection in optimization methods for machine learning
- The cyclic Barzilai-Borwein method for unconstrained optimization
- New adaptive stepsize selections in gradient methods
- Analysis of monotone gradient methods
- An efficient gradient method using the Yuan steplength
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Gradient methods with adaptive step-sizes
- A Positive Barzilai–Borwein-Like Stepsize and an Extension for Symmetric Linear Systems
- Minimizing finite sums with the stochastic average gradient
- Optimization methods for large-scale machine learning
- Stochastic variance reduced gradient methods using a trust-region-like scheme
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Equipping the Barzilai-Borwein method with the two dimensional quadratic termination property
- Balancing rates and variance via adaptive batch-size for stochastic optimization problems
Cited In (7)
- A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
- On adaptive stochastic heavy ball momentum for solving linear systems
- A stochastic gradient method with variance control and variable learning rate for deep learning
- Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
- Scale invariant stochastic gradient method with momentum
- A stochastic variance reduced gradient using Barzilai-Borwein techniques as second order information
- A semismooth Newton stochastic proximal point algorithm with variance reduction
This page was built for publication: A stochastic variance reduced gradient method with adaptive step for stochastic optimization