On the Convergence Rate of Incremental Aggregated Gradient Algorithms

Publication: 5266533

DOI: 10.1137/15M1049695
zbMATH Open: 1366.90195
arXiv: 1506.02081
OpenAlex: W3104398353
MaRDI QID: Q5266533
FDO: Q5266533

Pablo A. Parrilo, Mert Gürbüzbalaban, Asuman Ozdaglar

Publication date: 16 June 2017

Published in: SIAM Journal on Optimization

Abstract: Motivated by applications to distributed optimization over networks and large-scale data processing in machine learning, we analyze the deterministic incremental aggregated gradient method for minimizing a finite sum of smooth functions whose sum is strongly convex. This method processes the functions one at a time in a deterministic order and incorporates a memory of previous gradient values to accelerate convergence. Although the method performs well in practice, to our knowledge no theoretical analysis with explicit rate results was previously available in the literature; most recent efforts have instead concentrated on randomized versions. In this paper, we show that this deterministic algorithm has global linear convergence and we characterize its convergence rate. We also consider an aggregated method with momentum and demonstrate its linear convergence. Our proofs rely on a careful choice of a Lyapunov function that offers insight into the algorithm's behavior and simplifies the arguments considerably.


Full work available at URL: https://arxiv.org/abs/1506.02081
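
The abstract describes the incremental aggregated gradient (IAG) update only in words: components are visited in a fixed cyclic order, and the step uses a memory of previously computed component gradients. The following NumPy sketch illustrates that scheme under stated assumptions; the function name `iag`, its parameters, and the least-squares example are hypothetical choices made here for illustration, and the paper's step-size conditions and momentum variant are not reproduced.

```python
import numpy as np


def iag(grads, x0, step_size, num_cycles):
    """Cyclic incremental aggregated gradient (IAG) sketch for f(x) = sum_i f_i(x).

    grads[i](x) returns the gradient of the i-th component at x.
    Each inner step refreshes exactly one stored component gradient,
    while the update direction is the sum of all stored (possibly
    stale) gradients, maintained as a running sum.
    """
    x = np.asarray(x0, dtype=float)
    memory = [g(x) for g in grads]      # last computed gradient of each f_i
    aggregate = np.sum(memory, axis=0)  # sum of the stored gradients
    for _ in range(num_cycles):
        for i in range(len(grads)):     # deterministic cyclic order
            x = x - step_size * aggregate
            g_new = grads[i](x)              # recompute only component i
            aggregate += g_new - memory[i]   # keep the running sum exact
            memory[i] = g_new
    return x


# Illustrative use (assumed, not from the paper): strongly convex least
# squares with components f_i(x) = 0.5 * (a_i @ x - b_i)**2.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)
grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(50)]
x_iag = iag(grads, np.zeros(5), step_size=1e-4, num_cycles=2000)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]  # closed-form minimizer
print(np.linalg.norm(x_iag - x_ref))          # should be small
```

Keeping the aggregate as a running sum means each inner step costs one component-gradient evaluation plus O(d) arithmetic, which is the practical appeal of the method the paper analyzes.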





Cited in: 37 publications







