A unitary distributed subgradient method for multi-agent optimization with different coupling sources
From MaRDI portal
Abstract: In this work, we first consider distributed convex constrained optimization problems in which the objective function is the sum of multiple local, possibly nonsmooth, objectives privately held by a group of agents, and we propose a distributed subgradient method with double averaging that requires only peer-to-peer communication and local computation to solve the global problem. The algorithmic framework builds on dual methods and dynamic average consensus: the sequence of test points is formed by iteratively minimizing a local dual model of the overall objective whose coefficients, i.e., approximate subgradients of the objective, are supplied by the dynamic average consensus scheme. We theoretically show that the proposed method enjoys non-ergodic convergence, i.e., the local minimizing sequence itself is convergent, a distinct feature that cannot be found in existing results, and we establish an explicit convergence rate in terms of the objective function error. We then extend the method to distributed optimization problems with coupled functional constraints by combining it with dual decomposition. This is made possible by a Lagrangian relaxation that transforms the coupling in the constraints of the primal problem into coupling in the cost functions of the dual, thus allowing the dual problem to be solved by the proposed method. Both the dual objective error and the quadratic penalty for the coupled constraint are proved to converge at an explicit rate, and the primal objective error vanishes asymptotically. Numerical experiments and comparisons are conducted to illustrate the advantages of the proposed algorithms and validate our theoretical findings.
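The first part of the abstract can be illustrated with a small numerical sketch. The code below is our own minimal interpretation, not the paper's algorithm: each agent holds a nonsmooth local objective f_i(x) = |x - a_i|, a dynamic average consensus step tracks the network-average subgradient, and each agent forms its test point by minimizing a local dual (regularized linear) model, as in dual averaging. The problem data, the mixing matrix W, and the step-size choice are all assumptions made for the example.

```python
import numpy as np

# Hypothetical illustration of distributed dual averaging with subgradient
# tracking; problem data and parameters are ours, not taken from the paper.

a = np.array([1.0, 4.0, 9.0])         # agent i privately holds f_i(x) = |x - a[i]|
n = len(a)
W = np.array([[0.50, 0.25, 0.25],     # doubly stochastic mixing matrix (3-agent ring)
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def subgrad(i, x):
    """A subgradient of f_i(x) = |x - a[i]|."""
    return np.sign(x - a[i])

R = 10.0                              # simple feasible set X = [-R, R]
x = np.zeros(n)                       # local test points, one per agent
g = np.array([subgrad(i, x[i]) for i in range(n)])
s = g.copy()                          # tracked estimate of the average subgradient
z = np.zeros(n)                       # accumulated (dual) subgradient variables

for t in range(1, 3001):
    z += s                            # accumulate the tracked average subgradient
    step = 1.0 / np.sqrt(t)           # diminishing step size
    # Minimize the local dual model z*x + x^2/(2*step) over X = [-R, R]:
    x = np.clip(-step * z, -R, R)
    g_new = np.array([subgrad(i, x[i]) for i in range(n)])
    s = W @ s + g_new - g             # dynamic average consensus + innovation
    g = g_new

print(np.round(x, 2))                 # all agents should approach the median of a
```

The global objective sum_i |x - a_i| is minimized at the median of the a_i (here 4.0), so all local iterates x_i should cluster near that value, with consensus enforced by the mixing step and the tracking recursion s ← W s + g_new − g preserving the network-wide subgradient sum.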
Recommendations
- Distributed constrained optimization for multi-agent networks with nonsmooth objective functions
- Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Distributed subgradient method for multi-agent optimization with quantized communication
- Dual decomposition for multi-agent distributed optimization with coupling constraints
Cites work
- scientific article; zbMATH DE number 3790208 (title not available)
- A Bregman Splitting Scheme for Distributed Optimization Over Networks
- Accelerated Distributed MPC of Linear Discrete-Time Systems With Coupled Constraints
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- An augmented Lagrangian method for distributed optimization
- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
- Collective dynamics of `small-world' networks
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- Decomposition techniques in mathematical programming. Engineering and science applications.
- Discrete-time dynamic average consensus
- Distributed Online Optimization in Dynamic Environments Using Mirror Descent
- Distributed Saddle-Point Subgradient Algorithms With Laplacian Averaging
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed average consensus with least-mean-square deviation
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Dual decomposition for multi-agent distributed optimization with coupling constraints
- Dual subgradient method with averaging for optimal resource allocation
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Newton-Raphson Consensus for Distributed Convex Optimization
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- On the convergence of decentralized gradient descent
- Online Distributed Convex Optimization on Dynamic Networks
- Primal convergence from dual subgradient methods for convex optimization
- Primal recovery from consensus-based dual decomposition for distributed convex optimization
- Quasi-monotone subgradient methods for nonsmooth convex minimization
Cited in (4)
- Robust consensus control of nonlinear multi-agent systems based on convergence rate estimation
- A study on distributed optimization over large-scale networked systems
- Cooperative distributed multi-agent optimization
- A classification of methods for distributed system optimization based on formulation structure
This page was built for publication: A unitary distributed subgradient method for multi-agent optimization with different coupling sources
(MaRDI item Q2174029)