A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates

From MaRDI portal
Publication:6285940

DOI: 10.1109/TSP.2019.2926022
arXiv: 1704.07807
Wikidata: Q127539842 (Scholia: Q127539842)
MaRDI QID: Q6285940 (FDO: Q6285940)


Authors: Wei Shi, Ming Yan


Publication date: 25 April 2017

Abstract: This paper proposes a novel proximal-gradient algorithm for a decentralized optimization problem with a composite objective containing smooth and nonsmooth terms. Specifically, the smooth and nonsmooth terms are handled by gradient and proximal updates, respectively. The proposed algorithm is closely related to a previous algorithm, PG-EXTRA [Shi et al., 2015], but has a few advantages. First, agents use uncoordinated step-sizes, and the stable upper bounds on the step-sizes are independent of the network topology; the step-sizes depend only on the local objective functions and can be as large as those of gradient descent. Second, for the special case without nonsmooth terms, linear convergence is achieved under the strong convexity assumption. The dependence of the convergence rate on the objective functions and on the network is separated, and the convergence rate of the new algorithm is as good as one of the two rates, matching the typical rates for general gradient descent and for consensus averaging. Numerical experiments demonstrate the efficacy of the introduced algorithm and validate the theoretical findings.
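The abstract describes the key structural idea (the smooth term gets a gradient step, the nonsmooth term gets a proximal step, and agents mix iterates over the network) but does not reproduce the paper's update equations. The following is a minimal generic sketch of a decentralized proximal-gradient loop in that spirit, not the paper's exact algorithm: the functions `soft_threshold` and `prox_dgd`, the mixing matrix `W`, and the local quadratics `f_i(x) = 0.5*(x - b_i)^2` with shared penalty `r(x) = lam*|x|` are all illustrative assumptions.

```python
# Illustrative sketch of a decentralized proximal-gradient iteration
# (a simple DGD-style loop, NOT the paper's exact update). Each agent i
# holds a smooth local term f_i(x) = 0.5*(x - b_i)^2, handled by a
# gradient step, and a shared nonsmooth term r(x) = lam*|x|, handled
# by its proximal operator (soft-thresholding). W is doubly stochastic.

def soft_threshold(v, t):
    """Proximal operator of t*|x| (soft-thresholding); t >= 0."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prox_dgd(b, W, lam, alpha, iters):
    """Run mix -> local gradient step -> prox; return agents' states."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # 1) consensus mixing with neighbors
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        # 2) gradient step on the smooth part, 3) prox on the nonsmooth part
        x = [soft_threshold(mixed[i] - alpha * (x[i] - b[i]), alpha * lam)
             for i in range(n)]
    return x

# Tiny example: 3 fully connected agents, b = (1, 2, 3), lam = 0.5.
# The centralized minimizer of (1/3)*sum_i f_i + lam*|x| is
# soft_threshold(mean(b), lam) = 1.5; this simple sketch approaches it
# only up to an O(alpha) bias (the paper's method, by contrast, is exact
# and supports uncoordinated per-agent step-sizes).
W = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]]
x = prox_dgd([1.0, 2.0, 3.0], W, lam=0.5, alpha=0.01, iters=3000)
```

With a small fixed step-size the agents reach approximate consensus near the centralized solution; removing the fixed-point bias while allowing network-independent, uncoordinated step-sizes is precisely the improvement the abstract claims over such naive schemes.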
