Multi-Agent Distributed Optimization via Inexact Consensus ADMM

From MaRDI portal
Publication: 4579700

DOI: 10.1109/TSP.2014.2367458
zbMATH Open: 1393.90124
arXiv: 1402.6065
OpenAlex: W2092620240
MaRDI QID: Q4579700
FDO: Q4579700


Authors: Tsung-Hui Chang, Mingyi Hong, Xiangfeng Wang


Publication date: 22 August 2018

Published in: IEEE Transactions on Signal Processing

Abstract: Multi-agent distributed consensus optimization problems arise in many signal processing applications. Recently, the alternating direction method of multipliers (ADMM) has been used to solve this family of problems. ADMM-based distributed optimization has been shown to converge faster than classic methods based on the consensus subgradient, but it can be computationally expensive, especially for problems with complicated structure or large dimension. In this paper, we propose low-complexity algorithms that can reduce the overall computational cost of consensus ADMM by an order of magnitude for certain large-scale problems. Central to the proposed algorithms is the use of an inexact step for each ADMM update, which enables the agents to perform cheap computation at each iteration. Our convergence analyses show that the proposed methods converge under some convexity assumptions. Numerical results show that the proposed algorithms offer considerably lower computational complexity than standard ADMM-based distributed optimization methods.
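The key idea in the abstract, replacing each agent's exact ADMM subproblem solve with a single cheap gradient step, can be sketched for decentralized least squares. The following is a generic linearized consensus ADMM illustration, not the authors' exact algorithm; the graph, step parameters `c` and `beta`, and the data in the usage example are all hypothetical choices made for this sketch.

```python
import numpy as np

def inexact_consensus_admm(A, b, neighbors, c=1.0, beta=10.0, iters=3000):
    """Sketch of an inexact (linearized) consensus ADMM.

    Each agent i privately holds f_i(x) = 0.5 * ||A_i x - b_i||^2 and all
    agents must agree on x. Instead of solving its subproblem exactly,
    each agent takes one gradient step (linearizing f_i around its current
    iterate), which makes every iteration cheap. Parameters c (penalty)
    and beta (proximal weight) are illustrative defaults, not tuned values.
    """
    n_agents = len(A)
    dim = A[0].shape[1]
    x = [np.zeros(dim) for _ in range(n_agents)]
    p = [np.zeros(dim) for _ in range(n_agents)]  # dual variables

    for _ in range(iters):
        # Dual ascent on the edge-wise consensus constraints x_i = x_j.
        p = [p[i] + c * sum(x[i] - x[j] for j in neighbors[i])
             for i in range(n_agents)]
        x_new = []
        for i in range(n_agents):
            # Inexact step: one gradient evaluation, no inner solve.
            grad = A[i].T @ (A[i] @ x[i] - b[i])
            rhs = (beta * x[i] - grad - p[i]
                   + c * sum(x[i] + x[j] for j in neighbors[i]))
            x_new.append(rhs / (beta + 2.0 * c * len(neighbors[i])))
        x = x_new
    return x
```

As a usage sketch, three agents on a complete graph can be checked against the centralized least-squares solution:

```python
A = [np.array([[1., 0.], [0., 1.]]),
     np.array([[2., 0.], [0., 1.]]),
     np.array([[1., 1.], [0., 1.]])]
b = [np.array([1., 2.]), np.array([2., 0.]), np.array([3., 1.])]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

x = inexact_consensus_admm(A, b, neighbors)
x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(Ai.T @ bi for Ai, bi in zip(A, b)))
# All agents should (approximately) agree on the centralized solution.
```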


Full work available at URL: https://arxiv.org/abs/1402.6065







Cited In (49)




