Communication-efficient algorithms for decentralized and stochastic optimization
Abstract: We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. We first propose a decentralized primal-dual method which can find an \(\epsilon\)-solution, both in terms of functional optimality gap and feasibility residual, in \(O(1/\epsilon)\) inter-node communication rounds when the objective functions are convex and the local primal subproblems are solved exactly. Our major contribution is to present a new class of decentralized primal-dual type algorithms, namely the decentralized communication sliding (DCS) methods, which can skip inter-node communications while agents solve the primal subproblems iteratively through linearizations of their local objective functions. By employing DCS, agents can still find an \(\epsilon\)-solution in \(O(1/\epsilon)\) (resp., \(O(1/\sqrt{\epsilon})\)) communication rounds for general convex functions (resp., strongly convex functions), while maintaining the \(O(1/\epsilon^2)\) (resp., \(O(1/\epsilon)\)) bound on the total number of intra-node subgradient evaluations. We also present a stochastic counterpart of these algorithms, denoted by SDCS, for solving stochastic optimization problems whose objective function cannot be evaluated exactly. In comparison with existing results for decentralized nonsmooth and stochastic optimization, we can reduce the total number of inter-node communication rounds by orders of magnitude while still maintaining the optimal complexity bounds on intra-node stochastic subgradient evaluations. The bounds on the subgradient evaluations are actually comparable to those required for centralized nonsmooth and stochastic optimization.
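The core idea of communication sliding described above can be caricatured in a few lines: per outer round the agents communicate once with their neighbors (a mixing step), then each performs several communication-free local subgradient steps. The sketch below is a heavily simplified toy, not the paper's DCS method: it drops the primal-dual structure and uses a plain gossip-plus-subgradient loop on the illustrative problem \(\min_x \sum_i |x - a_i|\); the function `dcs_sketch`, the mixing matrix `W`, and all step-size choices are assumptions for illustration only.

```python
import numpy as np

def dcs_sketch(a, W, outer_rounds=50, inner_steps=10, lr=0.1):
    """Toy sketch of the communication-sliding idea (not the paper's DCS):
    one inter-node communication per outer round, several local
    subgradient steps in between. Agents jointly minimize
    sum_i |x - a_i| over a consensus variable x."""
    n = len(a)
    x = np.zeros(n)                    # one scalar estimate per agent
    for t in range(outer_rounds):
        x = W @ x                      # single inter-node communication (mixing)
        for _ in range(inner_steps):   # communication-free local work
            g = np.sign(x - a)         # agent i's subgradient of |x_i - a_i|
            x = x - (lr / (t + 1)) * g # diminishing step size
    return x
```

With a complete-graph mixing matrix `W = np.full((n, n), 1/n)` and `a = [0, 1, 2, 3, 4]`, the agents drift toward the median of the \(a_i\) while spending ten local subgradient steps per communication round, which is the qualitative trade-off (many cheap local steps per expensive communication) that the DCS complexity bounds make precise.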
Recommendations
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- A randomized incremental primal-dual method for decentralized consensus optimization
- Optimal convergence rates for convex distributed optimization in networks
- Decentralized online convex optimization with compressed communications
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
Cites work
- scientific article; zbMATH DE number 3148887
- scientific article; zbMATH DE number 3912096
- scientific article; zbMATH DE number 3790208
- scientific article; zbMATH DE number 194139
- scientific article; zbMATH DE number 3296905
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- An accelerated linearized alternating direction method of multipliers
- An optimal method for stochastic composite optimization
- An optimal randomized incremental gradient method
- Asynchronous Broadcast-Based Convex Optimization Over a Network
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Convergence Rate of Distributed ADMM Over Networks
- Coordination of groups of mobile autonomous agents using nearest neighbor rules
- DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
- Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization
- Distributed Optimization Over Time-Varying Directed Graphs
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed asynchronous incremental subgradient methods
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Distributed stochastic subgradient projection algorithms for convex optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Fast Distributed Gradient Methods
- Gradient sliding for composite optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Incremental proximal methods for large scale convex optimization
- Incremental stochastic subgradient algorithms for convex optimization
- Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- On Distributed Convex Optimization Under Inequality and Equality Constraints
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal primal-dual methods for a class of saddle point problems
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Robust Stochastic Approximation Approach to Stochastic Programming
- Stochastic Proximal Gradient Consensus Over Random Networks
- Validation analysis of mirror descent stochastic approximation method
Cited in (43)
- Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity
- DSA: decentralized double stochastic averaging gradient algorithm
- Decentralized online convex optimization with compressed communications
- Distributed Decoding From Heterogeneous 1-Bit Compressive Measurements
- Decentralized multi-agent optimization based on a penalty method
- Operator splitting methods for decentralized optimization
- Exact penalties for decomposable convex optimization problems
- A review of decentralized optimization focused on information flows of decomposition algorithms
- Decentralized Multi-Task Stochastic Optimization With Compressed Communications
- Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions
- Distributed stochastic gradient tracking methods
- scientific article; zbMATH DE number 7307473
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- Optimal gradient tracking for decentralized optimization
- Decentralized optimization over tree graphs
- On the linear convergence of two decentralized algorithms
- Decentralized ADMM with compressed and event-triggered communication
- On the Divergence of Decentralized Nonconvex Optimization
- Distributed multi-agent optimisation via coordination with second-order nearest neighbours
- Balancing Communication and Computation in Distributed Optimization
- Random gradient extrapolation for distributed and stochastic optimization
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Incremental without replacement sampling in nonconvex optimization
- Distributed Sparse Composite Quantile Regression in Ultrahigh Dimensions
- Recent theoretical advances in decentralized distributed convex optimization
- On decentralized nonsmooth optimization
- Distributed communication-sliding mirror-descent algorithm for nonsmooth resource allocation problem
- A dual approach for optimal algorithms in distributed optimization over networks
- Adaptive consensus: a network pruning approach for decentralized optimization
- Fast decentralized nonconvex finite-sum optimization with recursive variance reduction
- The communication complexity for decentralized evaluation of functions
- Optimal convergence rates for convex distributed optimization in networks
- Balancing communication and computation in gradient tracking algorithms for decentralized optimization
- A randomized incremental primal-dual method for decentralized consensus optimization
- Revisiting EXTRA for Smooth Distributed Optimization
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Optimal Methods for Convex Risk-Averse Distributed Optimization
- On the communication complexity of Lipschitzian optimization for the coordinated model of computation
- Distributed aggregative optimization with quantized communication
- scientific article; zbMATH DE number 6982986
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
- More communication-efficient distributed sparse learning