Fast Distributed Gradient Methods


DOI: 10.1109/TAC.2014.2298712
zbMATH Open: 1360.90292
arXiv: 1112.2972
MaRDI QID: Q2983161
FDO: Q2983161


Authors: Dušan Jakovetić, João Xavier, José M. F. Moura


Publication date: 16 May 2017

Published in: IEEE Transactions on Automatic Control

Abstract: We study distributed optimization problems in which $N$ nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradients (with constant $L$), and bounded gradients. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications $\mathcal{K}$ and the per-node gradient evaluations $k$. Our first method, Distributed Nesterov Gradient, achieves rates $O\left(\log \mathcal{K}/\mathcal{K}\right)$ and $O\left(\log k/k\right)$. Our second method, Distributed Nesterov gradient with Consensus iterations, assumes that all nodes know $L$ and $\mu(W)$ -- the second largest singular value of the $N \times N$ doubly stochastic weight matrix $W$. It achieves rates $O\left(1/\mathcal{K}^{2-\xi}\right)$ and $O\left(1/k^{2}\right)$ ($\xi > 0$ arbitrarily small). Further, for both methods we give the explicit dependence of the convergence constants on $N$ and $W$. Simulation examples illustrate our findings.
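
The following is a minimal sketch of a distributed Nesterov-style gradient iteration in the spirit of the abstract: each node averages its neighbors' iterates through a doubly stochastic weight matrix $W$, takes a local gradient step with a diminishing step size, and applies a Nesterov extrapolation. The quadratic local costs, the ring topology, the weight choices, and the step-size constant are illustrative assumptions, not the authors' exact method or parameters; see the paper for the precise updates and rates.

```python
# Sketch of a distributed Nesterov-style gradient iteration (illustrative only).
# N nodes hold convex local costs f_i with Lipschitz gradients and cooperate
# through a doubly stochastic weight matrix W to minimize sum_i f_i(x).
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 3                      # number of nodes, dimension of the common variable

# Illustrative local costs f_i(x) = 0.5 * ||A_i x - b_i||^2 (an assumption for this demo)
A = [rng.standard_normal((4, d)) for _ in range(N)]
b = [rng.standard_normal(4) for _ in range(N)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic weights on a ring graph (a simple symmetric choice, also an assumption)
W = np.zeros((N, N))
for i in range(N):
    for j in ((i - 1) % N, (i + 1) % N):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros((N, d))             # node estimates x_i
y = x.copy()                     # auxiliary (momentum) variables y_i
c = 0.1                          # step-size constant, tuned by hand here

for k in range(200):
    alpha = c / (k + 1)          # diminishing step size
    # consensus step on the momentum variables plus a local gradient step
    x_new = W @ y - alpha * np.array([grad(i, y[i]) for i in range(N)])
    # Nesterov-style extrapolation
    y = x_new + (k / (k + 3)) * (x_new - x)
    x = x_new

print("max disagreement between nodes:", np.max(np.abs(x - x.mean(axis=0))))
```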


Full work available at URL: https://arxiv.org/abs/1112.2972







Cited In (88)





