On the Convergence Rate of Incremental Aggregated Gradient Algorithms
DOI: 10.1137/15M1049695
zbMath: 1366.90195
arXiv: 1506.02081
MaRDI QID: Q5266533
Pablo A. Parrilo, Asuman Ozdaglar, Mert Gürbüzbalaban
Publication date: 16 June 2017
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1506.02081
90C25: Convex programming
90C06: Large-scale problems in mathematical programming
90C30: Nonlinear programming
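The recorded paper studies the incremental aggregated gradient (IAG) method for minimizing a finite sum f(x) = (1/n) Σ_i f_i(x). For orientation only, the sketch below shows the standard IAG update: a table of the most recently computed component gradients is kept, one entry is refreshed per iteration, and the step is taken along the aggregated sum. This is an illustrative Python sketch under common assumptions (smooth components, cyclic component order, constant step size); the function names, quadratic test problem, and step size are not taken from the recorded paper.

```python
import numpy as np

def iag(grad_fns, x0, stepsize, n_iters):
    """Minimal incremental aggregated gradient (IAG) sketch.

    grad_fns : list of callables; grad_fns[i](x) returns the gradient
               of the i-th component function f_i at x.
    Keeps a table of the most recently evaluated component gradients,
    refreshes exactly one entry per iteration (cyclic order here),
    and steps along the aggregated sum.
    """
    x = np.asarray(x0, dtype=float)
    n = len(grad_fns)
    # Initialize the gradient table at the starting point.
    table = [g(x) for g in grad_fns]
    aggregate = np.sum(table, axis=0)
    for k in range(n_iters):
        i = k % n                      # component refreshed this iteration
        new_grad = grad_fns[i](x)
        aggregate += new_grad - table[i]
        table[i] = new_grad
        x = x - (stepsize / n) * aggregate
    return x

# Illustrative use on a sum of quadratics f_i(x) = 0.5 * ||A_i x - b_i||^2.
rng = np.random.default_rng(0)
A = [rng.standard_normal((4, 3)) for _ in range(5)]
b = [rng.standard_normal(4) for _ in range(5)]
grads = [lambda x, A=A_i, b=b_i: A.T @ (A @ x - b) for A_i, b_i in zip(A, b)]
x_est = iag(grads, np.zeros(3), stepsize=0.05, n_iters=2000)
```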
Related Items
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
- Optimization Methods for Large-Scale Machine Learning
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Cites Work
- Incremental gradient algorithms with stepsizes bounded away from zero
- The incremental Gauss-Newton algorithm with adaptive stepsize rule
- Introductory lectures on convex optimization. A basic course.
- Why random reshuffling beats stochastic gradient descent
- Incrementally updated gradient methods for constrained and regularized optimization
- A globally convergent incremental Newton method
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Incremental Least Squares Methods and the Extended Kalman Filter
- Convergence Rate of Incremental Gradient and Incremental Newton Methods
- A Convergent Incremental Gradient Method with a Constant Step Size
- On-line learning for very large data sets