Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling

From MaRDI portal
Publication:5352733

DOI: 10.1109/TAC.2011.2161027
zbMATH Open: 1369.90156
arXiv: 1005.2012
OpenAlex: W3101665129
MaRDI QID: Q5352733
FDO: Q5352733

John C. Duchi, Alekh Agarwal, Martin J. Wainwright

Publication date: 8 September 2017

Published in: IEEE Transactions on Automatic Control

Abstract: The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale optimization in machine learning. We develop and analyze distributed algorithms based on dual averaging of subgradients, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our method of analysis allows for a clear separation between the convergence of the optimization algorithm itself and the effects of communication constraints arising from the network structure. In particular, we show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and by simulations for various networks. Our analysis covers both deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.
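The abstract's "dual averaging of subgradients" corresponds to a standard two-step update; the formulas below are a reconstruction of that standard form, not quoted from the paper. Each node i maintains a dual variable z_i(t) and an iterate x_i(t); P is a doubly stochastic matrix supported on the edges of the network, g_i(t) is a subgradient of the local function f_i at x_i(t), psi is a strongly convex proximal function, and alpha(t) is a step size.

```latex
% Sketch of a distributed dual-averaging iteration (notation illustrative,
% reconstructed from the standard form of the method, not from the paper):
\[
  z_i(t+1) \;=\; \sum_{j=1}^{n} P_{ij}\, z_j(t) \;+\; g_i(t),
  \qquad
  x_i(t+1) \;=\; \operatorname*{argmin}_{x \in \mathcal{X}}
    \Big\{ \langle z_i(t+1),\, x \rangle
      + \tfrac{1}{\alpha(t)}\, \psi(x) \Big\}.
\]
```

With psi(x) = (1/2)||x||^2 and an unconstrained domain, the second step reduces to x_i(t+1) = -alpha(t) z_i(t+1); that Euclidean special case is what the simulation sketch below implements.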


Full work available at URL: https://arxiv.org/abs/1005.2012
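
For concreteness, here is a minimal, self-contained simulation sketch of the Euclidean special case above. The ring topology, the nonsmooth local objectives f_i(x) = |a_i^T x - b_i|, and the 1/sqrt(t) step size are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of distributed dual averaging (Euclidean case):
# psi(x) = 0.5 * ||x||^2 on an unconstrained domain, so the prox step
# reduces to x_i(t+1) = -alpha(t) * z_i(t+1). Topology, objectives, and
# step size below are illustrative assumptions.
import numpy as np

n, d, T = 10, 5, 2000          # nodes, dimension, iterations
rng = np.random.default_rng(0)

# Doubly stochastic mixing matrix P for a ring: each node averages
# equally with itself and its two neighbours.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 1 / 3
    P[i, (i - 1) % n] = 1 / 3
    P[i, (i + 1) % n] = 1 / 3

# Local objectives f_i(x) = |a_i^T x - b_i| (nonsmooth); the global
# objective is their sum. A subgradient is sign(a_i^T x - b_i) * a_i.
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def subgrad(i, x):
    return np.sign(A[i] @ x - b[i]) * A[i]

Z = np.zeros((n, d))           # dual variables z_i
X = np.zeros((n, d))           # primal iterates x_i
X_avg = np.zeros((n, d))       # running time-averages of the iterates

for t in range(1, T + 1):
    G = np.array([subgrad(i, X[i]) for i in range(n)])
    Z = P @ Z + G              # consensus on duals plus new subgradients
    alpha = 1.0 / np.sqrt(t)   # assumed O(1/sqrt(t)) step size
    X = -alpha * Z             # Euclidean prox step, unconstrained domain
    X_avg += (X - X_avg) / t   # running average x_hat_i(t)

f_hat = sum(abs(A[i] @ X_avg[0] - b[i]) for i in range(n))
print(f"global objective at node 0's averaged iterate: {f_hat:.4f}")
```

On this ring, the spectral gap 1 - sigma_2(P) shrinks as the number of nodes grows, which is exactly the regime in which the abstract's inverse-spectral-gap iteration bound predicts slower convergence.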




