An Accelerated Method For Decentralized Distributed Stochastic Optimization Over Time-Varying Graphs

From MaRDI portal
Publication: 6364010

arXiv: 2103.15598 · MaRDI QID: Q6364010


Authors: Alexander Rogozin, Mikhail Bochko, Pavel Dvurechensky, A. V. Gasnikov


Publication date: 29 March 2021

Abstract: We consider a distributed stochastic optimization problem solved by a decentralized network of agents, with communication only between neighboring agents. The goal of the whole system is to minimize a global objective function given as a sum of local objectives held by the agents. Each local objective is defined as an expectation of a convex smooth random function, and each agent is allowed to sample stochastic gradients of its function. For this setting we propose the first accelerated (in the sense of Nesterov's acceleration) method that simultaneously attains communication and oracle complexity bounds that are optimal up to a logarithmic factor for smooth strongly convex distributed stochastic optimization. We also consider the case when the communication graph is allowed to vary with time and obtain complexity bounds for our algorithm; these are the first upper complexity bounds for this setting in the literature.
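To make the problem setting concrete, the following is a minimal illustrative sketch of plain decentralized stochastic gradient descent with gossip averaging over a time-varying graph. It shows the setup described in the abstract (local stochastic gradients plus neighbor-only communication through time-varying mixing matrices); it is a simple baseline, not the accelerated method proposed in the paper, and the quadratic objectives, network size, and step-size schedule are arbitrary choices for illustration.

```python
# Sketch of decentralized SGD with gossip over a time-varying graph.
# NOT the paper's accelerated algorithm; a baseline for the same setting.
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 6, 3, 4000   # agents, dimension, iterations (illustrative)

# Local objectives f_i(x) = E[0.5 * ||x - (b_i + noise)||^2]; the minimizer
# of the global average (1/n) * sum_i f_i(x) is the mean of the b_i.
b = rng.normal(size=(n, d))
x = np.zeros((n, d))   # one local iterate per agent

def mixing_matrix(t):
    """Symmetric doubly stochastic gossip matrix; the matched edge set
    alternates with t, modeling a time-varying communication graph."""
    W = np.eye(n)
    pairs = [(0, 1), (2, 3), (4, 5)] if t % 2 == 0 else [(1, 2), (3, 4), (5, 0)]
    for i, j in pairs:
        W[i, i] = W[j, j] = 0.5
        W[i, j] = W[j, i] = 0.5
    return W

for t in range(T):
    step = 1.0 / (t + 10)                     # decaying step size
    noise = 0.1 * rng.normal(size=(n, d))
    grad = x - b + noise                      # stochastic local gradients
    x = mixing_matrix(t) @ (x - step * grad)  # gradient step + gossip round

# All agents end up near the global minimizer b.mean(axis=0).
print(np.max(np.abs(x - b.mean(axis=0))))
```

Here each of the two alternating graphs is disconnected on its own, but their union is connected, which is the kind of joint-connectivity condition under which time-varying consensus schemes operate.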

