On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
From MaRDI portal
Publication:1689375
DOI: 10.1016/J.AUTOMATICA.2015.02.038
zbMATH Open: 1378.90065
arXiv: 1406.3720
OpenAlex: W2003085133
MaRDI QID: Q1689375
Authors: Ion Necoara, Valentin Nedelcu
Publication date: 12 January 2018
Published in: Automatica
Abstract: In this paper we propose a distributed dual gradient algorithm for minimizing linearly constrained separable convex problems and analyze its rate of convergence. In particular, we prove that, under the assumptions of strong convexity and Lipschitz continuity of the gradient of the primal objective function, a global error bound type property holds for the dual problem. Using this error bound property we devise a fully distributed dual gradient scheme, i.e. a gradient scheme based on a weighted step size, for which we derive a global linear rate of convergence for both dual and primal suboptimality and for primal feasibility violation. Many real applications, e.g. distributed model predictive control, network utility maximization or optimal power flow, can be posed as linearly constrained separable convex problems, for which dual gradient type methods from the literature have only a sublinear convergence rate. In the present paper we prove for the first time that a linear convergence rate can in fact be achieved when such algorithms are used to solve these applications. Numerical simulations are also provided to confirm our theory.
Full work available at URL: https://arxiv.org/abs/1406.3720
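The dual gradient scheme described in the abstract can be illustrated on a toy instance. The sketch below is not the authors' algorithm or data: it applies plain dual gradient ascent to a separable quadratic problem min_x Σ_i ½(x_i − c_i)² subject to the coupling constraint Σ_i x_i = b, where each agent updates its variable locally from a shared multiplier. The targets c, budget b, and step size are illustrative choices; because the objective is strongly convex with Lipschitz gradient, the residual decays geometrically, matching the linear rate the paper proves.

```python
# Hedged sketch (illustrative, not the paper's code): dual gradient ascent
# for  min_x  sum_i 0.5*(x_i - c_i)^2   s.t.  sum_i x_i = b.
c = [4.0, -1.0, 2.5, 0.5]   # local targets (made-up data)
b = 3.0                     # coupling constraint: sum_i x_i = b
n = len(c)
alpha = 0.5 / n             # conservative step; keeps the geometric decay visible

lam = 0.0                   # dual multiplier shared by all agents
for k in range(100):
    # local primal minimizers: argmin_x 0.5*(x - c_i)^2 + lam*x  =  c_i - lam
    x = [ci - lam for ci in c]
    residual = sum(x) - b   # primal feasibility violation = dual gradient
    lam += alpha * residual # dual (ascent) gradient step
    if abs(residual) < 1e-12:
        break

print(k, lam, residual)     # residual halves each iteration for this step size
```

For this quadratic instance the dual function is itself quadratic, so the iteration contracts with a fixed factor (here 0.5 per step), which is exactly the "global linear rate ... for primal feasibility violation" behavior the abstract claims in the strongly convex setting.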
Recommendations
- Linear convergence of primal-dual gradient methods and their performance in distributed optimization
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Dual gradient method for linearly constrained, strongly convex, separable mathematical programming problems
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
- A fast dual gradient method for separable convex optimization via smoothing
Keywords: dual decomposition; error bound; linear convergence; distributed gradient algorithm; separable convex problems
Cites Work
- Introductory lectures on convex optimization. A basic course.
- Rate Analysis of Inexact Dual First-Order Methods Application to Dual Decomposition
- An Accelerated Dual Gradient-Projection Algorithm for Embedded Linear Model Predictive Control
- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- An $O(1/k)$ Gradient Method for Network Resource Allocation Problems
- Bounds for error in the solution set of a perturbed linear program
- Accelerated gradient methods and dual decomposition in distributed model predictive control
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- A class of distributed optimization methods with event-triggered communication
- Iteration complexity analysis of dual first-order methods for conic convex programming
Cited In (23)
- Fully distributed algorithms for convex optimization problems
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- An adaptive online learning algorithm for distributed convex optimization with coupled constraints over unbalanced directed graphs
- Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems
- Distributed model predictive control -- recursive feasibility under inexact dual optimization
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- Linear Convergence of ADMM Under Metric Subregularity for Distributed Optimization
- Complexity certifications of first-order inexact Lagrangian methods for general convex programming: application to real-time MPC
- Proximal ADMM for nonconvex and nonsmooth optimization
- Decentralized convex optimization under affine constraints for power systems control
- Iteration complexity analysis of dual first-order methods for conic convex programming
- Decentralized Strongly-Convex Optimization with Affine Constraints: Primal and Dual Approaches
- Computational complexity certification for dual gradient method: application to embedded MPC
- A dual approach for optimal algorithms in distributed optimization over networks
- Distributed dual subgradient methods with averaging and applications to grid optimization
- Tracking-ADMM for distributed constraint-coupled optimization
- Metric selection in fast dual forward-backward splitting
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- Distributed sub-optimal resource allocation over weight-balanced graph via singular perturbation
- Dual gradient method for linearly constrained, strongly convex, separable mathematical programming problems
- Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling