On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems

From MaRDI portal
Publication:1689375

DOI: 10.1016/J.AUTOMATICA.2015.02.038
zbMATH Open: 1378.90065
arXiv: 1406.3720
OpenAlex: W2003085133
MaRDI QID: Q1689375
FDO: Q1689375


Authors: Ion Necoara, Valentin Nedelcu


Publication date: 12 January 2018

Published in: Automatica

Abstract: In this paper we propose a distributed dual gradient algorithm for minimizing linearly constrained separable convex problems and analyze its rate of convergence. In particular, we prove that, under the assumptions of strong convexity and Lipschitz continuity of the gradient of the primal objective function, a global error-bound-type property holds for the dual problem. Using this error bound property we devise a fully distributed dual gradient scheme, i.e. a gradient scheme based on a weighted step size, for which we derive a global linear rate of convergence for dual suboptimality, primal suboptimality, and primal feasibility violation. Many real applications, e.g. distributed model predictive control, network utility maximization, or optimal power flow, can be posed as linearly constrained separable convex problems, for which dual gradient-type methods from the literature have only sublinear convergence rates. In the present paper we prove for the first time that linear convergence can in fact be achieved when such algorithms are used to solve these applications. Numerical simulations are also provided to confirm our theory.
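The setting in the abstract can be illustrated with a small sketch. Below is a minimal, plain dual gradient ascent loop (not the paper's exact weighted-step-size scheme) for a problem of the form min_x Σ_i ½ x_iᵀQ_ix_i + q_iᵀx_i subject to the coupling constraint Σ_i A_ix_i = b; the problem data, sizes, and step-size rule here are illustrative assumptions. Each "agent" i minimizes its local Lagrangian given the dual variable, and the dual gradient is simply the constraint residual:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 3, 2, 4  # local dimension, number of coupling constraints, number of agents

# Strongly convex local objectives f_i(x_i) = 1/2 x_i'Q_i x_i + q_i'x_i (illustrative data)
Q = [(1.0 + i) * np.eye(n) for i in range(N)]
q = [rng.standard_normal(n) for _ in range(N)]
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = rng.standard_normal(m)

def local_min(i, lam):
    # Agent i's subproblem: argmin_{x_i} 1/2 x_i'Q_i x_i + q_i'x_i + lam'(A_i x_i)
    return np.linalg.solve(Q[i], -(q[i] + A[i].T @ lam))

# The dual function is differentiable with Lipschitz gradient; its Hessian is
# H = sum_i A_i Q_i^{-1} A_i', so 1/||H|| is a valid constant step size.
H = sum(A[i] @ np.linalg.solve(Q[i], A[i].T) for i in range(N))
alpha = 1.0 / np.linalg.norm(H, 2)

lam = np.zeros(m)
for _ in range(5000):
    x = [local_min(i, lam) for i in range(N)]       # distributed primal updates
    residual = sum(A[i] @ x[i] for i in range(N)) - b  # dual gradient = coupling residual
    lam = lam + alpha * residual                    # dual gradient ascent step

print("primal feasibility violation:", np.linalg.norm(residual))
```

On strongly convex quadratics like these the dual is itself quadratic, so even this plain scheme converges linearly and the feasibility violation shrinks geometrically; the paper's contribution is establishing such linear rates in the general strongly convex, Lipschitz-gradient setting via a dual error bound, with a weighted step size enabling a fully distributed implementation.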


Full work available at URL: https://arxiv.org/abs/1406.3720




Recommendations




Cites Work


Cited In (23)





This page was built for publication: On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
