On the Convergence Rate of Incremental Aggregated Gradient Algorithms


Publication:5266533


DOI: 10.1137/15M1049695
zbMath: 1366.90195
arXiv: 1506.02081
MaRDI QID: Q5266533

Pablo A. Parrilo, Mert Gürbüzbalaban, Asuman Ozdaglar

Publication date: 16 June 2017

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1506.02081
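For context, the incremental aggregated gradient (IAG) method analyzed in this paper stores one (possibly stale) gradient per component function, refreshes them cyclically, and steps along the aggregate of the stored gradients. Below is a minimal illustrative sketch on a least-squares problem; the function name, problem instance, and step size are chosen here for illustration and are not taken from the paper.

```python
import numpy as np

def iag_least_squares(A, b, step, n_iters):
    """Sketch of the IAG iteration on f(x) = (1/n) * sum_i 0.5*(a_i @ x - b_i)^2.

    One stored gradient per component; each iteration refreshes a single
    component's gradient (cyclic order) and steps along the aggregate.
    """
    n, d = A.shape
    x = np.zeros(d)
    # Stored component gradients, all initialised at x = 0.
    g = np.array([(A[i] @ x - b[i]) * A[i] for i in range(n)])
    G = g.sum(axis=0)           # running aggregate of stored gradients
    for k in range(n_iters):
        i = k % n               # cyclic component selection
        new_gi = (A[i] @ x - b[i]) * A[i]
        G += new_gi - g[i]      # refresh the aggregate in O(d)
        g[i] = new_gi
        x = x - (step / n) * G  # step along the aggregated gradient
    return x

# Small strongly convex instance (illustrative data, not from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
x = iag_least_squares(A, b, step=0.05, n_iters=5000)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_star))
```

On strongly convex smooth problems such as this one, the iterates converge linearly to the least-squares solution, which is the regime the paper's rate analysis addresses.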


90C25: Convex programming

90C06: Large-scale problems in mathematical programming

90C30: Nonlinear programming


Related Items

Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
An incremental mirror descent subgradient algorithm with random sweeping and proximal step
Distributed Deterministic Asynchronous Algorithms in Time-Varying Graphs Through Dykstra Splitting
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Optimization Methods for Large-Scale Machine Learning
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning
On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
Convergence Rate of Incremental Gradient and Incremental Newton Methods
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Linear convergence of proximal incremental aggregated gradient method for nonconvex nonsmooth minimization problems
A distributed accelerated optimization algorithm over time-varying directed graphs with uncoordinated step-sizes
An asynchronous subgradient-proximal method for solving additive convex optimization problems
A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
Heavy-ball-based optimal thresholding algorithms for sparse linear inverse problems
Heavy-ball-based hard thresholding algorithms for sparse signal recovery
Convergence rates of subgradient methods for quasi-convex optimization problems
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
Incremental without replacement sampling in nonconvex optimization
Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
Fully asynchronous policy evaluation in distributed reinforcement learning over networks
Inertial proximal incremental aggregated gradient method with linear convergence guarantees
Accelerating incremental gradient optimization with curvature information
Linear convergence of cyclic SAGA
An incremental aggregated proximal ADMM for linearly constrained nonconvex optimization with application to sparse logistic regression problems
Primal-dual incremental gradient method for nonsmooth and convex optimization problems
Communication-efficient algorithms for decentralized and stochastic optimization
An inertial parallel and asynchronous forward-backward iteration for distributed convex optimization



Cites Work