Convergence Rate of Distributed ADMM Over Networks
Publication:4566904
DOI: 10.1109/TAC.2017.2677879 · zbMATH Open: 1390.90551 · arXiv: 1601.00194 · OpenAlex: W2962782955 · MaRDI QID: Q4566904 · FDO: Q4566904
Authors: Ali Makhdoumi, Asuman Ozdaglar
Publication date: 27 June 2018
Published in: IEEE Transactions on Automatic Control
Abstract: We propose a distributed algorithm based on the alternating direction method of multipliers (ADMM) to minimize the sum of locally known convex functions using communication over a network. This optimization problem emerges in many applications in distributed machine learning and statistical estimation. We show that when the functions are convex, both the objective function values and the feasibility violation converge with rate O(1/T), where T is the number of iterations. We then show that if the functions are strongly convex and have Lipschitz continuous gradients, the sequence generated by our algorithm converges linearly to the optimal solution. In particular, an ε-optimal solution can be computed with O(√κ log(1/ε)) iterations, where κ is the condition number of the problem. Our analysis also highlights the effect of network structure on the convergence rate through the maximum and minimum degrees of nodes as well as the algebraic connectivity of the network.
Full work available at URL: https://arxiv.org/abs/1601.00194
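To make the setting concrete, the following is a minimal sketch of decentralized consensus ADMM on a toy problem. It is an illustration of the general algorithm class the paper studies, not the authors' exact updates: each node i holds a local quadratic f_i(x) = ½(x − a_i)², nodes exchange primal values only with graph neighbors, and the network jointly minimizes Σ_i f_i(x), whose optimum is the average of the a_i. The function name, graph, and parameter values are all hypothetical choices for the example.

```python
# Hypothetical decentralized consensus ADMM sketch (edge-based form):
# each node keeps a primal variable x_i and an aggregated dual alpha_i,
# and only communicates with its graph neighbors.

def decentralized_admm(a, neighbors, rho=1.0, iters=300):
    """Consensus ADMM on a connected undirected graph.

    a         : local data, node i minimizes f_i(x) = 0.5*(x - a[i])**2
    neighbors : adjacency list, neighbors[i] = nodes adjacent to i
    rho       : ADMM penalty parameter
    """
    n = len(a)
    x = [0.0] * n       # local primal variables
    alpha = [0.0] * n   # aggregated dual variables, one per node
    for _ in range(n * 0 + iters):
        # Primal step: closed-form minimizer of the local augmented
        # Lagrangian for a quadratic f_i:
        #   x_i <- argmin_x f_i(x) + alpha_i*x
        #          + rho * sum_{j in N(i)} (x - (x_i + x_j)/2)**2
        x_new = []
        for i in range(n):
            d = len(neighbors[i])
            s = sum(x[i] + x[j] for j in neighbors[i])
            x_new.append((a[i] + rho * s - alpha[i]) / (1.0 + 2.0 * rho * d))
        x = x_new
        # Dual step: accumulate disagreement with neighbors
        alpha = [alpha[i] + rho * sum(x[i] - x[j] for j in neighbors[i])
                 for i in range(n)]
    return x

# Path graph 0-1-2; the network-wide optimum is the average of the a_i.
xs = decentralized_admm([1.0, 2.0, 6.0], [[1], [0, 2], [1]])
print(xs)  # each x_i close to 3.0
```

On this strongly convex quadratic instance the iterates converge linearly, consistent with the linear-rate regime described in the abstract; the degree d of each node enters the primal update directly, which is one way the network structure influences the iteration.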
Cited In (21)
- Communication-efficient algorithms for decentralized and stochastic optimization
- Distributed event-triggered algorithm for unconstrained convex optimisation over weight-balanced directed networks
- Distributed optimization over directed graphs with row stochasticity and constraint regularity
- Network-decentralised optimisation and control: an explicit saturated solution
- Augmented Lagrange algorithms for distributed optimization over multi-agent networks via edge-based method
- A local-minimization-free zero-gradient-sum algorithm for distributed optimization
- Decentralized consensus algorithm with delayed and stochastic gradients
- Distributed subgradient-free stochastic optimization algorithm for nonsmooth convex functions over time-varying networks
- Is ADMM always faster than average consensus?
- Decentralized optimization over tree graphs
- A novel bound on the convergence rate of ADMM for distributed optimization
- Convergence testing on a distributed network of processors
- A distributed methodology for approximate uniform global minimum sharing
- Distributed stochastic variance reduced gradient methods by sampling extra data with replacement
- Distributed decision-coupled constrained optimization via proximal-tracking
- Distributed optimization with information-constrained population dynamics
- Tracking-ADMM for distributed constraint-coupled optimization
- Solving Fused Penalty Estimation Problems via Block Splitting Algorithms
- Revisiting EXTRA for Smooth Distributed Optimization
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- A divide-and-conquer algorithm for distributed optimization on networks