Distributed stochastic gradient tracking methods
From MaRDI portal
Abstract: In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a distributed stochastic gradient tracking method (DSGT) and a gossip-like stochastic gradient tracking method (GSGT). We show that, in expectation, the iterates generated by each agent are attracted to a neighborhood of the optimal solution, where they accumulate exponentially fast (under a constant stepsize choice). Under DSGT, the limiting (expected) error bounds on the distance of the iterates from the optimal solution decrease with the network size, a performance comparable to that of a centralized stochastic gradient algorithm. Moreover, we show that when the network is well-connected, GSGT incurs lower communication cost than DSGT while maintaining a similar computational cost. A numerical example further demonstrates the effectiveness of the proposed methods.
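The gradient tracking scheme summarized in the abstract can be sketched as follows. This is a minimal one-dimensional simulation, assuming quadratic local costs, a ring communication graph with lazy-Metropolis mixing weights, and additive Gaussian gradient noise; all of these modeling choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5         # number of agents
alpha = 0.02  # constant stepsize
sigma = 0.1   # gradient-noise level (illustrative)

# Local quadratics f_i(x) = 0.5 * a_i * (x - b_i)^2; the minimizer
# of the average cost is the a-weighted mean of the b_i.
a = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)
x_star = np.sum(a * b) / np.sum(a)

# Doubly stochastic mixing matrix W for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3

def noisy_grad(x):
    # Unbiased estimates of the local gradients a_i * (x_i - b_i).
    return a * (x - b) + sigma * rng.standard_normal(n)

x = np.zeros(n)   # local iterates x_i
g = noisy_grad(x)
y = g.copy()      # gradient trackers, initialized at y_0 = g_0

for _ in range(2000):
    x = W @ (x - alpha * y)   # mix with neighbors, step along tracker
    g_new = noisy_grad(x)
    y = W @ y + g_new - g     # track the average stochastic gradient
    g = g_new

print("mean distance to optimum:", np.mean(np.abs(x - x_star)))
```

With a constant stepsize, the iterates settle in a noise-dependent neighborhood of the optimum rather than converging exactly, consistent with the error bounds described in the abstract.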
Recommendations
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- Distributed consensus-based multi-agent convex optimization via gradient tracking technique
- Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms
- Stochastic mirror descent method for distributed multi-agent optimization
- Distributed gradient tracking methods with finite data rates
Cites work
- scientific article; zbMATH DE number 5454133
- scientific article; zbMATH DE number 47926
- scientific article; zbMATH DE number 1972910
- A Distributed Algorithm for Convex Constrained Optimization Under Noise
- A Stochastic Approximation Method
- A flocking-based approach for distributed stochastic optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Adaptive Penalty-Based Distributed Stochastic Convex Optimization
- An augmented Lagrangian method for distributed optimization
- Asynchronous Gossip-Based Random Projection Algorithms Over Networks
- Communication-efficient algorithms for decentralized and stochastic optimization
- Constrained Consensus and Optimization in Multi-Agent Networks
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- DLM: Decentralized Linearized Alternating Direction Method of Multipliers
- Decentralized consensus algorithm with delayed and stochastic gradients
- Design and analysis of simulation experiments
- Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks
- Distributed Learning Algorithms for Spectrum Sharing in Spatial Random Access Wireless Networks
- Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization
- Distributed Recursive Least-Squares: Stability and Performance Analysis
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication
- Distributed multi-agent optimization with state-dependent communication
- Distributed stochastic subgradient projection algorithms for convex optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Fast Distributed Gradient Methods
- Gossip Algorithms for Convex Consensus Optimization Over Networks
- Harnessing Smoothness to Accelerate Distributed Optimization
- Multi-fidelity optimization via surrogate modelling
- Noise Reduction by Swarming in Social Foraging
- Nonlinear gossip
- On Projected Stochastic Gradient Descent Algorithm with Weighted Averaging for Least Squares Regression
- On the Learning Behavior of Adaptive Networks—Part I: Transient Analysis
- On the Learning Behavior of Adaptive Networks—Part II: Performance Analysis
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- Robust Stochastic Approximation Approach to Stochastic Programming
- Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs
- Swarming for faster convergence in stochastic optimization
Cited in (36)
- Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- An accelerated distributed gradient method with local memory
- Distributed Personalized Gradient Tracking With Convex Parametric Models
- scientific article; zbMATH DE number 7365721
- Convergence of distributed gradient-tracking-based optimization algorithms with random graphs
- A unified momentum-based paradigm of decentralized SGD for non-convex models and heterogeneous data
- Distributed predefined-time constrained social cost minimization problem under the partial information setting
- scientific article; zbMATH DE number 7307473
- Stochastic mirror descent for convex optimization with consensus constraints
- Computation of exact gradients in distributed dynamic systems
- Optimal gradient tracking for decentralized optimization
- Event-triggered primal-dual design with linear convergence for distributed nonstrongly convex optimization
- Distributed stochastic compositional optimization problems over directed networks
- Incremental without replacement sampling in nonconvex optimization
- Recent theoretical advances in decentralized distributed convex optimization
- Distributed projection‐free algorithm for constrained aggregative optimization
- Improving the Transient Times for Distributed Stochastic Gradient Methods
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Gradient-Tracking-Based Distributed Optimization With Guaranteed Optimality Under Noisy Information Sharing
- Tracking-ADMM for distributed constraint-coupled optimization
- Fast decentralized nonconvex finite-sum optimization with recursive variance reduction
- Gradient-tracking based differentially private distributed optimization with enhanced optimization accuracy
- Distributed gradient tracking methods with finite data rates
- An event-triggering algorithm for decentralized stochastic optimization over networks
- Exact spectral-like gradient method for distributed optimization
- Balancing communication and computation in gradient tracking algorithms for decentralized optimization
- Projected subgradient based distributed convex optimization with transmission noises
- Generalized multi-cluster game under partial-decision information with applications to management of energy Internet
- Convergence results of a nested decentralized gradient method for non-strongly convex problems
- An accelerated decentralized stochastic optimization algorithm with inexact model
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
- Zeroth-order feedback optimization for cooperative multi-agent systems
- Confidence region for distributed stochastic optimization problem via stochastic gradient tracking method
- Distributed constrained optimization with periodic dynamic quantization
This page was built for publication: Distributed stochastic gradient tracking methods