Multi-Agent Distributed Optimization via Inexact Consensus ADMM
From MaRDI portal
Publication:4579700
Abstract: Multi-agent distributed consensus optimization problems arise in many signal processing applications. Recently, the alternating direction method of multipliers (ADMM) has been used to solve this family of problems. ADMM-based distributed optimization methods have been shown to converge faster than classic methods based on consensus subgradients, but they can be computationally expensive, especially for problems with complicated structure or large dimension. In this paper, we propose low-complexity algorithms that can reduce the overall computational cost of consensus ADMM by an order of magnitude for certain large-scale problems. Central to the proposed algorithms is the use of an inexact step for each ADMM update, which enables the agents to perform cheap computation at each iteration. Our convergence analyses show that the proposed methods converge under certain convexity assumptions. Numerical results show that the proposed algorithms offer considerably lower computational complexity than standard ADMM-based distributed optimization methods.
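The inexact step described in the abstract can be illustrated with the standard decentralized consensus-ADMM template, in which each agent's exact x-subproblem is replaced by a single linearized (proximal-gradient) step with a closed-form solution. The following is a minimal sketch, assuming quadratic local costs f_i(x) = ½‖A_i x − b_i‖²; the function name, penalty `c`, and proximal weight `beta` are illustrative choices of ours, not the paper's exact algorithm or notation:

```python
import numpy as np

def inexact_consensus_admm(A_list, b_list, neighbors, c=1.0, beta=20.0, iters=5000):
    """Decentralized consensus least squares via a linearized (inexact) consensus ADMM.

    Each agent i holds f_i(x) = 0.5*||A_i x - b_i||^2 and communicates only with
    its neighbors. The exact ADMM x-subproblem is replaced by one proximal
    gradient step, so each iteration costs a single local gradient evaluation.
    """
    n = len(A_list)
    d = A_list[0].shape[1]
    x = [np.zeros(d) for _ in range(n)]   # local primal iterates
    p = [np.zeros(d) for _ in range(n)]   # local aggregated dual iterates
    for _ in range(iters):
        x_new = []
        for i in range(n):
            # cheap inexact step: one gradient instead of an exact subproblem solve
            grad = A_list[i].T @ (A_list[i] @ x[i] - b_list[i])
            deg = len(neighbors[i])
            # closed form from setting the surrogate's gradient to zero
            num = beta * x[i] - grad - p[i] + c * sum(x[i] + x[j] for j in neighbors[i])
            x_new.append(num / (beta + 2.0 * c * deg))
        # dual ascent on the edge-wise consensus constraints
        for i in range(n):
            p[i] = p[i] + c * sum(x_new[i] - x_new[j] for j in neighbors[i])
        x = x_new
    return x
```

At a fixed point the dual variables sum to zero over an undirected graph, so all agents agree on the minimizer of the summed cost; `beta` must be large enough relative to the local gradients' Lipschitz constants for the linearized step to be stable.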
Cited in (49 documents):
- Implementing the alternating direction method of multipliers for big datasets: a case study of least absolute shrinkage and selection operator
- Distributed convex optimization with coupling constraints over time-varying directed graphs
- A distributed ADMM-like method for resource sharing over time-varying networks
- A decentralized smoothing quadratic regularization algorithm for composite consensus optimization with non-Lipschitz singularities
- Sparse canonical correlation analysis algorithm with alternating direction method of multipliers
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Communication-efficient algorithms for decentralized and stochastic optimization
- Online supervised learning with distributed features over multiagent system
- Distributed convex optimization based on ADMM and belief propagation methods
- Differentially private distributed optimization for multi-agent systems via the augmented Lagrangian algorithm
- Distributed nonconvex constrained optimization over time-varying digraphs
- Distributed Model Predictive Control of linear discrete-time systems with local and global constraints
- Distributed Nash equilibrium seeking under partial-decision information via the alternating direction method of multipliers
- Distributed Nonconvex Optimization of Multiagent Systems Using Boosting Functions to Escape Local Optima
- Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method
- Supervised model predictive control of large‐scale electricity networks via clustering methods
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- A partially inexact ADMM with o(1/n) asymptotic convergence rate, 𝒪(1/n) complexity, and immediate relative error tolerance
- GADMM: fast and communication efficient framework for distributed machine learning
- Proximal ADMM for nonconvex and nonsmooth optimization
- Linear convergence of primal-dual gradient methods and their performance in distributed optimization
- A review of decentralized optimization focused on information flows of decomposition algorithms
- Decentralized consensus algorithm with delayed and stochastic gradients
- Augmented Lagrangian optimization under fixed-point arithmetic
- A solution strategy for distributed uncertain economic dispatch problems via scenario theory
- Optimal gradient tracking for decentralized optimization
- On the linear convergence of two decentralized algorithms
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- A variance-reduced stochastic gradient tracking algorithm for decentralized optimization with orthogonality constraints
- A fast proximal gradient algorithm for decentralized composite optimization over directed networks
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization
- Golden ratio proximal gradient ADMM for distributed composite convex optimization
- Distributed Robust Subspace Recovery
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
- A generalized alternating direction implicit method for consensus optimization: application to distributed sparse logistic regression
- Tracking-ADMM for distributed constraint-coupled optimization
- A new look at distributed optimal output agreement of multi-agent systems
- Distributed Nash equilibrium learning: A second‐order proximal algorithm
- A block successive upper-bound minimization method of multipliers for linearly constrained convex optimization
- Distributed and consensus optimization for non-smooth image reconstruction
- A randomized incremental primal-dual method for decentralized consensus optimization
- A fully distributed ADMM-based dispatch approach for virtual power plant problems
- Proximal nested primal-dual gradient algorithms for distributed constraint-coupled composite optimization
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Zeroth-order feedback optimization for cooperative multi-agent systems
- A survey on some recent developments of alternating direction method of multipliers
- Distributed online semi-supervised support vector machine
- Decentralized Dynamic Optimization Through the Alternating Direction Method of Multipliers