Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers
From MaRDI portal
Publication: 2980596
DOI: 10.1109/TAC.2015.2448011
zbMATH Open: 1359.90103
arXiv: 1312.1085
OpenAlex: W4300028551
MaRDI QID: Q2980596
FDO: Q2980596
Franck Iutzeler, Pascal Bianchi, Philippe Ciblat, W. Hachem
Publication date: 3 May 2017
Published in: IEEE Transactions on Automatic Control
Abstract: Consider a set of $N$ agents seeking to solve distributively the minimization problem $\inf_x \sum_{n=1}^N f_n(x)$, where the convex functions $f_n$ are local to the agents. The popular Alternating Direction Method of Multipliers (ADMM) has the potential to handle distributed optimization problems of this kind. We provide a general reformulation of the problem and obtain a class of distributed algorithms which encompass various network architectures. The rate of convergence of our method is considered. It is assumed that the infimum of the problem is attained at a point $x_\star$, that the functions $f_n$ are twice differentiable at this point, and that $\sum_{n=1}^N \nabla^2 f_n(x_\star) > 0$ in the positive definite ordering of symmetric matrices. Under these assumptions, it is shown that the convergence to the consensus $x_\star$ is linear, and the exact rate is provided. Application examples where this rate can be optimized with respect to the free parameter of ADMM are also given.
Full work available at URL: https://arxiv.org/abs/1312.1085
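The global consensus problem treated in the abstract can be illustrated with a minimal sketch of consensus ADMM. This is not the paper's algorithm or code: it assumes scalar quadratic local costs $f_n(x) = \tfrac{1}{2} a_n (x - b_n)^2$ (so the local update has a closed form), and the penalty value `rho` is an arbitrary choice standing in for the ADMM free parameter whose tuning the paper studies.

```python
import numpy as np

# Hypothetical illustration: consensus ADMM for min_x sum_n f_n(x)
# with quadratic local costs f_n(x) = 0.5 * a_n * (x - b_n)^2.
rng = np.random.default_rng(0)
N = 5
a = rng.uniform(1.0, 3.0, N)     # local curvatures (positive, so sum of Hessians > 0)
b = rng.uniform(-1.0, 1.0, N)    # local minimizers
x_star = np.dot(a, b) / a.sum()  # exact consensus optimum (weighted average)

rho = 1.0          # ADMM free parameter; its choice governs the linear rate
x = np.zeros(N)    # local primal variables, one per agent
u = np.zeros(N)    # scaled dual variables
z = 0.0            # consensus variable

for k in range(200):
    # Local update: argmin_x f_n(x) + (rho/2) * (x - z + u_n)^2, closed form here.
    x = (a * b + rho * (z - u)) / (a + rho)
    # Consensus update: average of x_n + u_n across agents.
    z = np.mean(x + u)
    # Dual update.
    u += x - z

print(abs(z - x_star))  # distance to the optimum shrinks linearly in k
```

With all curvatures $a_n > 0$, the positive-definiteness assumption of the abstract holds, and the iterates contract toward $x_\star$ at a linear (geometric) rate that depends on `rho` and the $a_n$.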
Cited In (18)
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- A distributed asynchronous method of multipliers for constrained nonconvex optimization
- Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method
- Is ADMM always faster than average consensus?
- Risk-averse stochastic programming and distributionally robust optimization via operator splitting
- Multi-cluster distributed optimization via random sleep strategy
- A novel bound on the convergence rate of ADMM for distributed optimization
- Tight global linear convergence rate bounds for Douglas-Rachford splitting
- A fully distributed asynchronous approach for multi-area coordinated network-constrained unit commitment
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Distributed decision-coupled constrained optimization via proximal-tracking
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- Linear Time Average Consensus and Distributed Optimization on Fixed Graphs
- Tracking-ADMM for distributed constraint-coupled optimization
- Distributed support vector machine in master-slave mode
- Revisiting EXTRA for Smooth Distributed Optimization
- Decentralized Consensus Algorithm with Delayed and Stochastic Gradients
- A generic online acceleration scheme for optimization algorithms via relaxation and inertia