Achieving Linear Convergence in Distributed Asynchronous Multi-agent Optimization

Publication: Q6299667

DOI: 10.1109/TAC.2020.2977940
arXiv: 1803.10359
MaRDI QID: Q6299667


Authors: Ye Tian, Ying Sun, Gesualdo Scutari


Publication date: 27 March 2018

Abstract: This paper studies multi-agent (convex and nonconvex) optimization over static digraphs. We propose a general distributed asynchronous algorithmic framework whereby i) agents can update their local variables as well as communicate with their neighbors at any time, without any form of coordination; and ii) they can perform their local computations using (possibly) delayed, out-of-sync information from the other agents. Delays need not be known to the agents or obey any specific profile, and can also be time-varying (but bounded). The algorithm builds on a tracking mechanism that is robust against asynchrony (in the above sense), whose goal is to estimate locally the average of the agents' gradients. When applied to strongly convex functions, we prove that the algorithm converges at an R-linear (geometric) rate as long as the step-size is sufficiently small. A sublinear convergence rate is proved when nonconvex problems and/or diminishing, uncoordinated step-sizes are considered. To the best of our knowledge, this is the first distributed algorithm with a provable geometric convergence rate in such a general asynchronous setting. Preliminary numerical results demonstrate the efficacy of the proposed algorithm and validate our theoretical findings.
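The gradient-tracking mechanism mentioned in the abstract can be illustrated with a minimal synchronous sketch (the paper's contribution is its robustness to asynchrony and delays on digraphs, which this toy omits). Here each agent holds a simple quadratic f_i(x) = 0.5 (x - a_i)^2, so the global minimizer of the sum is the mean of the a_i; the ring topology, mixing weights, and step-size below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hedged sketch of (synchronous) gradient tracking on a ring of n agents.
# Each agent i minimizes f_i(x) = 0.5 * (x - a_i)^2; the minimizer of
# sum_i f_i is a.mean(). The paper treats the asynchronous, delayed
# version of this scheme on digraphs; this toy shows only the tracker.

n = 5
rng = np.random.default_rng(0)
a = rng.normal(size=n)          # local data; global optimum is a.mean()

# Doubly stochastic mixing matrix for a ring (self + two neighbors).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

grad = lambda x: x - a          # stacked local gradients

x = np.zeros(n)                 # local iterates
y = grad(x)                     # trackers, initialized to local gradients
alpha = 0.1                     # small constant step-size (linear-rate regime)

for _ in range(200):
    x_new = W @ x - alpha * y             # consensus step along tracked gradient
    y = W @ y + grad(x_new) - grad(x)     # update average-gradient estimate
    x = x_new

print(np.allclose(x, a.mean(), atol=1e-5))
```

Because W is doubly stochastic, the sum of the trackers y always equals the sum of the current local gradients, so each y_i converges to the average gradient; with a strongly convex objective and a small enough step-size this yields the geometric rate the abstract refers to (in the synchronous case).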













