On the linear convergence of two decentralized algorithms
Publication: 2032033
DOI: 10.1007/s10957-021-01833-y
OpenAlex: W3134851553
MaRDI QID: Q2032033
Publication date: 15 June 2021
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/1906.07225
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Distributed stochastic subgradient projection algorithms for convex optimization
- Discrete-time dynamic average consensus
- Introductory lectures on convex optimization. A basic course.
- Distributed optimization over networks
- On the Convergence of Decentralized Gradient Descent
- Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs
- Fast Distributed Gradient Methods
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Stochastic Proximal Gradient Consensus Over Random Networks
- Exact Diffusion for Distributed Optimization and Learning—Part I: Algorithm Development
- Exact Diffusion for Distributed Optimization and Learning—Part II: Convergence Analysis
- Harnessing Smoothness to Accelerate Distributed Optimization
- ExtraPush for Convex Smooth Decentralized Optimization Over Directed Networks
- Distributed Subgradient Methods for Multi-Agent Optimization
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Asynchronous Broadcast-Based Convex Optimization Over a Network