Convergence rate of incremental gradient and incremental Newton methods
DOI: 10.1137/17M1147846 · zbMATH Open: 1428.90119 · arXiv: 1510.08562 · OpenAlex: W2980820424 · Wikidata/Scholia: Q127020296 · MaRDI QID/FDO: Q5237308
Authors: Mert Gürbüzbalaban, Asuman Ozdaglar, Pablo A. Parrilo
Publication date: 17 October 2019
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1510.08562
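The paper indexed here concerns minimizing a finite sum f(x) = f_1(x) + ... + f_m(x) by processing one component function per update, cycling through the components in a deterministic order with a diminishing step size. As context for this record, below is a minimal Python sketch of such a cyclic incremental gradient pass on an illustrative least-squares problem; the function incremental_gradient, the 1/k step-size schedule, and the data are assumptions for illustration, not the paper's exact algorithm or constants.

import numpy as np

# Minimal sketch of a cyclic incremental gradient (IG) pass for
# f(x) = sum_i f_i(x). Step-size schedule and data are illustrative only.
def incremental_gradient(grads, x0, iters=100):
    x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        alpha = 1.0 / k          # diminishing step size (assumed schedule)
        for g in grads:          # one deterministic pass over the components
            x = x - alpha * g(x)
    return x

# Illustrative least squares: f_i(x) = 0.5 * (a_i @ x - b_i) ** 2,
# so grad f_i(x) = a_i * (a_i @ x - b_i).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((5, 2)), rng.standard_normal(5)
grads = [lambda x, a=a, bi=bi: a * (a @ x - bi) for a, bi in zip(A, b)]
print(incremental_gradient(grads, np.zeros(2)))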
Recommendations
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- A globally convergent incremental Newton method
- A Convergent Incremental Gradient Method with a Constant Step Size
- Convergence rate of incremental subgradient algorithms
- Incrementally updated gradient methods for constrained and regularized optimization
MSC:
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Nonlinear programming (90C30)
Cites Work
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Introductory lectures on convex optimization. A basic course.
- Acceleration of Stochastic Approximation by Averaging
- Parallel stochastic gradient algorithms for large-scale matrix completion
- Convergence rate of incremental subgradient algorithms
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Logarithmic regret algorithms for online convex optimization
- Incremental gradient algorithms with stepsizes bounded away from zero
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Gradient Convergence in Gradient Methods with Errors
- Distributed Subgradient Methods for Multi-Agent Optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- A Convergent Incremental Gradient Method with a Constant Step Size
- Incremental subgradient methods for nondifferentiable optimization
- On-line learning for very large data sets
- Incremental Least Squares Methods and the Extended Kalman Filter
- A New Class of Incremental Gradient Methods for Least Squares Problems
- On a Stochastic Approximation Method
- A Collaborative Training Algorithm for Distributed Learning
- Convex optimization algorithms
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- An Adaptive Associative Memory Principle
- A globally convergent incremental Newton method
- The incremental Gauss-Newton algorithm with adaptive stepsize rule
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
- Why random reshuffling beats stochastic gradient descent
- Global convergence rate of proximal incremental aggregated gradient methods
Cited In (14)
- Accelerating incremental gradient optimization with curvature information
- Rate of convergence of a generalization of Newton's method
- A globally convergent incremental Newton method
- Convergence rate of the gradient descent method with dilatation of the space
- Incremental quasi-Newton algorithms for solving a nonconvex, nonsmooth, finite-sum optimization problem
- IQN: an incremental quasi-Newton method with local superlinear convergence rate
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
- Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
- Why random reshuffling beats stochastic gradient descent
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Title not available
- Title not available
- Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate
- Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator