Asynchronous Distributed ADMM for Large-Scale Optimization—Part I: Algorithm and Convergence Analysis
From MaRDI portal
Publication: 4619615
DOI: 10.1109/TSP.2016.2537271 · zbMATH Open: 1414.94106 · arXiv: 1509.02597 · OpenAlex: W2155723734 · MaRDI QID: Q4619615 · FDO: Q4619615
Authors: Tsung-Hui Chang, Mingyi Hong, Wei-Cheng Liao, Xiangfeng Wang
Publication date: 7 February 2019
Published in: IEEE Transactions on Signal Processing
Abstract: Aiming at solving large-scale learning problems, this paper studies distributed optimization methods based on the alternating direction method of multipliers (ADMM). By formulating the learning problem as a consensus problem, the ADMM can be used to solve the consensus problem in a fully parallel fashion over a computer network with a star topology. However, traditional synchronized computation does not scale well with the problem size, as the speed of the algorithm is limited by the slowest workers. This is particularly true in a heterogeneous network where the computing nodes experience different computation and communication delays. In this paper, we propose an asynchronous distributed ADMM (AD-ADMM) which can effectively improve the time efficiency of distributed optimization. Our main interest lies in analyzing the convergence conditions of the AD-ADMM under the popular partially asynchronous model, which is defined based on a maximum tolerable delay of the network. Specifically, by considering general and possibly non-convex cost functions, we show that the AD-ADMM is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points as long as the algorithm parameters are chosen appropriately according to the network delay. We further illustrate that the asynchrony of the ADMM has to be handled with care, as slightly modifying the implementation of the AD-ADMM can jeopardize the algorithm convergence, even under a standard convex setting.
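To illustrate the consensus formulation the abstract refers to, the following is a minimal sketch of *synchronous* consensus ADMM on a star topology for a distributed least-squares problem min_x Σ_i ||A_i x − b_i||². It is not the paper's AD-ADMM (no asynchrony or delay handling is modeled); the function name, problem instance, and penalty parameter `rho` are illustrative assumptions.

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=200):
    """Synchronous consensus ADMM sketch for min_x sum_i ||A_i x - b_i||^2.

    Each worker i holds its local data (A_i, b_i) and a local copy x_i;
    the master node averages the copies into the consensus variable z.
    This is an illustrative baseline, not the paper's asynchronous scheme.
    """
    n = As[0].shape[1]
    N = len(As)
    xs = [np.zeros(n) for _ in range(N)]  # local primal copies x_i
    us = [np.zeros(n) for _ in range(N)]  # scaled dual variables u_i
    z = np.zeros(n)                       # consensus (master) variable
    for _ in range(iters):
        # Worker step: each node minimizes f_i(x) + (rho/2)||x - z + u_i||^2,
        # which for quadratic f_i has the closed-form solution below.
        for i in range(N):
            H = 2.0 * As[i].T @ As[i] + rho * np.eye(n)
            g = 2.0 * As[i].T @ bs[i] + rho * (z - us[i])
            xs[i] = np.linalg.solve(H, g)
        # Master step: average the local copies, then update the duals.
        z = np.mean([xs[i] + us[i] for i in range(N)], axis=0)
        for i in range(N):
            us[i] += xs[i] - z
    return z
```

In the synchronous variant above, the master must wait for every worker in each round; the paper's AD-ADMM instead lets the master update z as soon as a subset of workers report, subject to a bounded-delay (partially asynchronous) condition.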
Full work available at URL: https://arxiv.org/abs/1509.02597
Cited In (23)
- An inertial parallel and asynchronous forward-backward iteration for distributed convex optimization
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Asynchronous parallel primal-dual block coordinate update methods for affinely constrained convex programs
- Hybrid MPI/OpenMP parallel asynchronous distributed alternating direction method of multipliers
- Asynchronous ADMM for nonlinear continuous-time systems
- Linear Convergence of ADMM Under Metric Subregularity for Distributed Optimization
- Decentralized consensus algorithm with delayed and stochastic gradients
- Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions
- Alternating iterative methods for solving tensor equations with applications
- Partition-based multi-agent optimization in the presence of lossy and asynchronous communication
- A simplified form of block-iterative operator splitting and an asynchronous algorithm resembling the multi-block alternating direction method of multipliers
- A novel bound on the convergence rate of ADMM for distributed optimization
- Asynchronous Distributed Optimization Over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence
- A fully distributed asynchronous approach for multi-area coordinated network-constrained unit commitment
- A generalized alternating direction implicit method for consensus optimization: application to distributed sparse logistic regression
- Synchronous distributed ADMM for consensus convex optimization problems with self-loops
- Asynchronous Distributed ADMM for Large-Scale Optimization—Part II: Linear Convergence Analysis and Numerical Performance
- A Distributed, Asynchronous, and Incremental Algorithm for Nonconvex Optimization: An ADMM Approach
- Distributed optimization with information-constrained population dynamics
- A fast diagonal distance metric learning approach for large-scale datasets
- Exponential convergence of a distributed algorithm for solving linear algebraic equations
- Primal-dual algorithms for multi-agent structured optimization over message-passing architectures with bounded communication delays
- An Uncertainty-Weighted Asynchronous ADMM Method for Parallel PDE Parameter Estimation