Accelerating mini-batch SARAH by step size rules
DOI: 10.1016/j.ins.2020.12.075
zbMath: 1484.90060
arXiv: 1906.08496
OpenAlex: W3123987833
MaRDI QID: Q2127094
Authors: Cheng Wang, Zhuang Yang, Zeng-Ping Chen
Publication date: 19 April 2022
Published in: Information Sciences
Full work available at URL: https://arxiv.org/abs/1906.08496
Cites Work
- On stochastic gradient and subgradient methods with adaptive steplength sequences
- Sample size selection in optimization methods for machine learning
- Introductory lectures on convex optimization. A basic course.
- Stochastic gradient descent with Barzilai-Borwein update step for SVM
- Accelerated Stochastic Approximation
- Two-Point Step Size Gradient Methods
- Probabilistic Line Searches for Stochastic Optimization
- Optimization Methods for Large-Scale Machine Learning
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A Stochastic Approximation Method
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm