Asynchronous Decentralized Successive Convex Approximation

From MaRDI portal
Publication:6325781

arXiv: 1909.10144 · MaRDI QID: Q6325781

Ye Tian, Ying Sun, Gesualdo Scutari

Publication date: 22 September 2019

Abstract: We study decentralized asynchronous multiagent optimization over networks, modeled as static (possibly directed) graphs. The optimization problem consists of minimizing a (possibly nonconvex) smooth function, the sum of the agents' local costs, plus a convex (possibly nonsmooth) regularizer, subject to convex constraints. Agents can perform their local computations and communicate with their immediate neighbors at any time, without any form of coordination or centralized scheduling; furthermore, when solving their local subproblems, they can use outdated information from their neighbors. We propose the first distributed asynchronous algorithm, termed ASY-DSCA, that converges at an R-linear rate to the optimal solution of convex problems whose objective function satisfies a general error bound condition; this condition is weaker than the more frequently used strong convexity, and it is satisfied by several empirical risk functions that are not strongly convex, such as those arising in LASSO and logistic regression problems. When the objective function is nonconvex, ASY-DSCA converges to a stationary solution of the problem at a sublinear rate.
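The setting in the abstract can be illustrated with a schematic numerical sketch: each agent holds a local smooth cost, one agent activates at a time (no coordination), mixes possibly outdated copies of its neighbors' iterates, and then takes a proximal step on a convex surrogate of its local cost. The sketch below uses a LASSO-type problem as the example and is a simplified, hypothetical illustration of the asynchronous-with-delays pattern, not an implementation of the paper's ASY-DSCA; the function names, step size, and delay model are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 regularizer t * ||x||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def async_decentralized_sketch(A_list, b_list, W, lam=0.1, step=0.01,
                               iters=2000, max_delay=5, seed=0):
    """Schematic asynchronous decentralized iteration for
        minimize  sum_i 0.5 * ||A_i x - b_i||^2  +  lam * ||x||_1.
    One randomly activated agent updates per iteration, mixing
    possibly stale neighbor iterates (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n_agents = len(A_list)
    d = A_list[0].shape[1]
    x = np.zeros((n_agents, d))        # current local iterates
    hist = [x.copy()]                  # recent iterates, to model stale reads
    for _ in range(iters):
        i = rng.integers(n_agents)     # uncoordinated (random) activation
        delay = rng.integers(min(len(hist), max_delay))
        stale = hist[-1 - delay]       # outdated neighbor information
        # consensus mixing with (possibly stale) neighbor iterates
        v = sum(W[i, j] * stale[j] for j in range(n_agents))
        # linearize the smooth local cost at v; prox step on the regularizer
        grad = A_list[i].T @ (A_list[i] @ v - b_list[i])
        x[i] = soft_threshold(v - step * grad, step * lam)
        hist.append(x.copy())
        if len(hist) > max_delay + 1:  # keep only a bounded delay window
            hist.pop(0)
    return x.mean(axis=0)
```

The bounded `hist` window models the paper's assumption that outdated information has bounded delay; the mixing matrix `W` here is an assumed doubly stochastic weight matrix over the communication graph.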

This page was built for publication: Asynchronous Decentralized Successive Convex Approximation
