On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
From MaRDI portal
Publication:1689375
Abstract: In this paper we propose a distributed dual gradient algorithm for minimizing linearly constrained separable convex problems and analyze its rate of convergence. In particular, we prove that under the assumption of strong convexity and Lipschitz continuity of the gradient of the primal objective function, a global error bound type property holds for the dual problem. Using this error bound property we devise a fully distributed dual gradient scheme, i.e. a gradient scheme based on a weighted step size, for which we derive a global linear rate of convergence for both dual and primal suboptimality and for primal feasibility violation. Many real applications, e.g. distributed model predictive control, network utility maximization, or optimal power flow, can be posed as linearly constrained separable convex problems, for which dual gradient type methods from the literature have only a sublinear convergence rate. In the present paper we prove for the first time that a linear convergence rate can in fact be achieved for such algorithms when they are applied to these problems. Numerical simulations are also provided to confirm our theory.
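To illustrate the setting the abstract describes, the following is a minimal sketch (not the paper's weighted-step-size method) of plain dual gradient ascent on a linearly constrained separable problem. The instance is a hypothetical strongly convex quadratic, min Σᵢ (qᵢ/2)xᵢ² + cᵢxᵢ subject to Ax = b; each block's primal minimizer is computable locally, and only the constraint residual couples the blocks.

```python
import numpy as np

# Hypothetical instance: min sum_i (q_i/2) x_i^2 + c_i x_i  s.t.  A x = b,
# with q_i > 0, so the primal objective is strongly convex with Lipschitz gradient.
rng = np.random.default_rng(0)
n, m = 6, 2
q = rng.uniform(1.0, 2.0, n)            # per-block curvatures
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)          # b in the range of A, so the problem is feasible

lam = np.zeros(m)                       # dual multipliers for A x = b
# Dual gradient is Lipschitz with constant L = lambda_max(A Q^{-1} A^T); step 1/L:
alpha = 1.0 / np.linalg.norm(A / np.sqrt(q), 2) ** 2

for _ in range(2000):
    # Each block i minimizes its local Lagrangian independently (the distributable step):
    x = -(c + A.T @ lam) / q
    # Gradient ascent on the concave dual; the dual gradient is the residual A x - b:
    lam = lam + alpha * (A @ x - b)

print(np.linalg.norm(A @ x - b))        # primal feasibility violation
```

Under the abstract's assumptions the dual iteration is a contraction here (A·diag(1/q)·Aᵀ has full rank for this random instance), so the feasibility violation decays geometrically; the paper's contribution is establishing such linear rates in the general distributed, weighted-step setting rather than for this toy unweighted scheme.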
Recommendations
- Linear convergence of primal-dual gradient methods and their performance in distributed optimization
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Dual gradient method for linearly constrained, strongly convex, separable mathematical programming problems
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
- A fast dual gradient method for separable convex optimization via smoothing
Cites work
- A class of distributed optimization methods with event-triggered communication
- Accelerated gradient methods and dual decomposition in distributed model predictive control
- An $O(1/k)$ Gradient Method for Network Resource Allocation Problems
- An Accelerated Dual Gradient-Projection Algorithm for Embedded Linear Model Predictive Control
- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
- Bounds for error in the solution set of a perturbed linear program
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Introductory lectures on convex optimization. A basic course.
- Iteration complexity analysis of dual first-order methods for conic convex programming
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Rate Analysis of Inexact Dual First-Order Methods Application to Dual Decomposition
Cited in (25)
- An adaptive online learning algorithm for distributed convex optimization with coupled constraints over unbalanced directed graphs
- Tracking-ADMM for distributed constraint-coupled optimization
- Dual gradient method for linearly constrained, strongly convex, separable mathematical programming problems
- Fully distributed algorithms for convex optimization problems
- Distributed dual subgradient methods with averaging and applications to grid optimization
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling
- Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems
- Distributed sub-optimal resource allocation over weight-balanced graph via singular perturbation
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- Computational complexity certification for dual gradient method: application to embedded MPC
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- A dual approach for optimal algorithms in distributed optimization over networks
- Proximal ADMM for nonconvex and nonsmooth optimization
- Linear Convergence of ADMM Under Metric Subregularity for Distributed Optimization
- Linear convergence of primal-dual gradient methods and their performance in distributed optimization
- Complexity certifications of first-order inexact Lagrangian methods for general convex programming: application to real-time MPC
- Iteration complexity analysis of dual first-order methods for conic convex programming
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- Metric selection in fast dual forward-backward splitting
- Distributed model predictive control -- recursive feasibility under inexact dual optimization
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Decentralized convex optimization under affine constraints for power systems control
- Decentralized Strongly-Convex Optimization with Affine Constraints: Primal and Dual Approaches