Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs
Publication:2979341
Abstract: We investigate the convergence rate of the recently proposed subgradient-push method for distributed optimization over time-varying directed graphs. The subgradient-push method can be implemented in a distributed way without requiring knowledge of either the number of agents or the graph sequence; each node only needs to know its out-degree at each time. Our main result is a convergence rate of $O((\ln t)/t)$ for strongly convex functions with Lipschitz gradients, even if only stochastic gradient samples are available; this is asymptotically faster than the $O((\ln t)/\sqrt{t})$ rate previously known for (general) convex functions.
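The abstract describes the gradient-push update only at a high level. The following is a minimal sketch of the method on a toy quadratic problem; the graph, problem data, and variable names are illustrative assumptions, not taken from the paper. Each node pushes its iterate and a push-sum weight to its out-neighbors, scaled by its own out-degree (the only piece of network information a node needs), de-biases by the weight, and takes a stochastic gradient step with an $O(1/t)$ step size.

```python
import numpy as np

# Toy problem (illustrative): node i holds f_i(z) = 0.5*(z - a_i)^2,
# so the global minimizer of sum_i f_i is mean(a).
rng = np.random.default_rng(0)
n = 4
a = np.array([1.0, 2.0, 3.0, 4.0])   # global optimum = mean(a) = 2.5

x = np.zeros(n)   # gradient-push iterates
y = np.ones(n)    # push-sum weights, initialized to 1

# A fixed strongly connected directed graph, given as out-neighbor lists
# (self-loops included). The paper allows time-varying graph sequences;
# a fixed graph keeps the sketch short.
out_neighbors = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]}

for t in range(1, 2001):
    step = 1.0 / t          # O(1/t) step size for the strongly convex case
    w_new = np.zeros(n)
    y_new = np.zeros(n)
    for j in range(n):
        d = len(out_neighbors[j])    # node j only needs its own out-degree
        for i in out_neighbors[j]:   # push scaled values to out-neighbors
            w_new[i] += x[j] / d
            y_new[i] += y[j] / d
    z = w_new / y_new       # de-biased local estimates of the average
    # stochastic gradient of f_i at z_i: (z_i - a_i) plus small noise
    grad = (z - a) + 0.01 * rng.standard_normal(n)
    x = w_new - step * grad
    y = y_new

print(z)   # each entry should approach mean(a) = 2.5
```

With the diminishing step size, every node's de-biased estimate `z[i]` converges to the network-wide minimizer despite the graph being directed and unbalanced, which is the behavior the abstract's rate result quantifies.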
Cited in (35):
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
- Multi-agent based optimal equilibrium selection with resilience constraints for traffic flow
- An adaptive online learning algorithm for distributed convex optimization with coupled constraints over unbalanced directed graphs
- Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs
- Cooperative convex optimization with subgradient delays using push-sum distributed dual averaging
- Multi-agent flocking control with complex obstacles and adaptive distributed convex optimization
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- Privacy-preserving dual stochastic push-sum algorithm for distributed constrained optimization
- Distributed stochastic optimization algorithm with non-consistent constraints in time-varying unbalanced networks
- Gradient-free algorithms for distributed online convex optimization
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- On the linear convergence of two decentralized algorithms
- Optimal convergence rates for convex distributed optimization in networks
- Distributed optimization for multi-agent system over unbalanced graphs with linear convergence rate
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- Distributed stochastic gradient tracking methods
- Decentralized consensus algorithm with delayed and stochastic gradients
- An accelerated distributed online gradient push-sum algorithm on time-varying directed networks
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- Confidence region for distributed stochastic optimization problem via stochastic gradient tracking method
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Distributed constrained optimization for multi-agent networks with communication delays under time-varying topologies
- A differentially private distributed optimization method for constrained optimization
- Optimal distributed stochastic mirror descent for strongly convex optimization
- Distributed heterogeneous multi-agent optimization with stochastic sub-gradient
- An improved distributed gradient-push algorithm for bandwidth resource allocation over wireless local area network
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions
- Snake: A Stochastic Proximal Gradient Algorithm for Regularized Problems Over Large Graphs
- Distributed discrete-time convex optimization with nonidentical local constraints over time-varying unbalanced directed graphs
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- A distributed ADMM-like method for resource sharing over time-varying networks