On the Convergence of Inexact Gradient Descent with Controlled Synchronization Steps

Publication: 6408024

arXiv: 2208.07797
MaRDI QID: Q6408024
FDO: Q6408024


Authors: Sandushan Ranaweera, Chathuranga Weeraddana, Prathapasinghe Dharmawansa, Carlo Fischione


Publication date: 16 August 2022

Abstract: We develop a gradient-like algorithm that minimizes a sum of peer objective functions through coordination over a peer interconnection network. The coordination comprises two stages: the first constitutes a gradient, possibly with errors, for updating locally replicated decision variables at each peer, and the second performs error-free averaging to synchronize the local replicas. Unlike many related algorithms, the errors permitted in our algorithm can cover a wide range of inexactness, as long as they are bounded. Moreover, we do not impose any gradient boundedness conditions on the objective functions. Furthermore, unlike many related algorithms, the second stage is not conducted periodically. Instead, a locally verifiable criterion is devised to dynamically trigger the peer-to-peer coordination of the second stage, so that the expensive communication overhead of error-free averaging can be significantly reduced. Finally, the convergence of the algorithm is established under mild conditions.
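
The abstract describes a two-stage scheme: inexact gradient updates of locally replicated decision variables, followed by error-free averaging that is triggered by a locally verifiable criterion rather than on a fixed schedule. Below is a minimal Python sketch of that structure only, not the paper's algorithm: the bounded-error model, the disagreement-based trigger, and all constants are illustrative assumptions.

import numpy as np

def two_stage_inexact_gd(grads, x0, n_peers, steps=200, lr=0.05,
                         err_bound=0.01, drift_tol=0.1, seed=0):
    """Hypothetical sketch: grads is a list of per-peer gradient functions."""
    rng = np.random.default_rng(seed)
    x = np.tile(np.asarray(x0, dtype=float), (n_peers, 1))  # one local replica per peer
    syncs = 0
    for _ in range(steps):
        # Stage 1: each peer constitutes the sum gradient with a bounded error
        # (this additive-noise error model is an assumption) and updates its replica.
        for i in range(n_peers):
            g = sum(grads[j](x[i]) for j in range(n_peers))
            g += rng.uniform(-err_bound, err_bound, size=g.shape)
            x[i] -= lr * g
        # Stage 2: error-free averaging, triggered only when the replicas
        # disagree beyond a tolerance (an assumed, locally checkable criterion;
        # the paper's actual trigger is not reproduced here).
        mean = x.mean(axis=0)
        if np.max(np.linalg.norm(x - mean, axis=1)) > drift_tol:
            x[:] = mean  # exact synchronization of the local replicas
            syncs += 1
    return x.mean(axis=0), syncs

# Usage: minimize a sum of quadratics f_i(x) = 0.5 * ||x - c_i||^2, whose
# minimizer is the average of the centers c_i.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 2.0])]
grads = [lambda x, c=c: x - c for c in centers]
x_hat, syncs = two_stage_inexact_gd(grads, x0=[0.0, 0.0], n_peers=3)
print(x_hat, syncs)  # x_hat should approach the average of the centers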




Has companion code repository: https://github.com/Sandushan/inexact-gradient-descent









