Communication-efficient algorithms for decentralized and stochastic optimization

From MaRDI portal
Publication:2297648

DOI: 10.1007/S10107-018-1355-4
zbMATH Open: 1437.90125
arXiv: 1701.03961
OpenAlex: W2963855576
Wikidata: Q128829704
Scholia: Q128829704
MaRDI QID: Q2297648
FDO: Q2297648


Authors: Yanyan Li


Publication date: 20 February 2020

Published in: Mathematical Programming. Series A. Series B

Abstract: We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. We first propose a decentralized primal-dual method which can find an $\epsilon$-solution both in terms of functional optimality gap and feasibility residual in $\mathcal{O}(1/\epsilon)$ inter-node communication rounds when the objective functions are convex and the local primal subproblems are solved exactly. Our major contribution is to present a new class of decentralized primal-dual type algorithms, namely the decentralized communication sliding (DCS) methods, which can skip the inter-node communications while agents solve the primal subproblems iteratively through linearizations of their local objective functions. By employing DCS, agents can still find an $\epsilon$-solution in $\mathcal{O}(1/\epsilon)$ (resp., $\mathcal{O}(1/\sqrt{\epsilon})$) communication rounds for general convex functions (resp., strongly convex functions), while maintaining the $\mathcal{O}(1/\epsilon^2)$ (resp., $\mathcal{O}(1/\epsilon)$) bound on the total number of intra-node subgradient evaluations. We also present a stochastic counterpart for these algorithms, denoted by SDCS, for solving stochastic optimization problems whose objective function cannot be evaluated exactly. In comparison with existing results for decentralized nonsmooth and stochastic optimization, we can reduce the total number of inter-node communication rounds by orders of magnitude while still maintaining the optimal complexity bounds on intra-node stochastic subgradient evaluations. The bounds on the subgradient evaluations are actually comparable to those required for centralized nonsmooth and stochastic optimization.


Full work available at URL: https://arxiv.org/abs/1701.03961
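
The following is a minimal illustrative sketch (not the authors' exact DCS algorithm) of the "communication sliding" idea described in the abstract: agents exchange information with neighbours only in the outer loop, while the inner loop runs several local subgradient steps without any communication. The network, local objectives, and step sizes below are assumptions chosen for the example, not taken from the paper.

    # Hedged sketch of communication sliding for decentralized nonsmooth optimization.
    # Assumptions: a doubly stochastic mixing matrix W, per-agent subgradient oracles,
    # and fixed step sizes; all are illustrative, not the paper's parameter choices.
    import numpy as np

    def decentralized_sliding(subgrads, W, x0, outer_rounds=50, inner_steps=10, step=0.05):
        """subgrads[i](x) returns a subgradient of agent i's local objective at x;
        W is a doubly stochastic mixing matrix over the communication graph."""
        n_agents, dim = x0.shape
        x = x0.copy()
        for k in range(outer_rounds):
            # One round of inter-node communication: neighbourhood averaging.
            x = W @ x
            # "Sliding": each agent refines its iterate with several local
            # subgradient steps, skipping further communication.
            for _ in range(inner_steps):
                for i in range(n_agents):
                    x[i] -= step * subgrads[i](x[i])
        return x.mean(axis=0)

    if __name__ == "__main__":
        # Toy example: 3 agents, each minimizing the nonsmooth function ||x - b_i||_1.
        rng = np.random.default_rng(0)
        b = rng.normal(size=(3, 2))
        subgrads = [lambda x, bi=bi: np.sign(x - bi) for bi in b]
        W = np.array([[0.50, 0.25, 0.25],
                      [0.25, 0.50, 0.25],
                      [0.25, 0.25, 0.50]])
        x_hat = decentralized_sliding(subgrads, W, np.zeros((3, 2)))
        print("approximate consensus solution:", x_hat)

In this sketch, increasing inner_steps reduces the number of communication rounds needed for a given accuracy, which mirrors the trade-off the paper quantifies: the communication complexity drops to $\mathcal{O}(1/\epsilon)$ while the total subgradient-evaluation count stays at the centralized-optimal order.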










Cited In (32)





This page was built for publication: Communication-efficient algorithms for decentralized and stochastic optimization
