Stochastic Gradient-Push for Strongly Convex Functions on Time-Varying Directed Graphs
From MaRDI portal
Publication:2979341
DOI: 10.1109/TAC.2016.2529285 · zbMATH Open: 1359.90142 · arXiv: 1406.2075 · OpenAlex: W2963156126 · MaRDI QID: Q2979341
Authors: Angelia Nedić, Alex Olshevsky
Publication date: 3 May 2017
Published in: IEEE Transactions on Automatic Control
Abstract: We investigate the convergence rate of the recently proposed subgradient-push method for distributed optimization over time-varying directed graphs. The subgradient-push method can be implemented in a distributed way without requiring knowledge of either the number of agents or the graph sequence; each node is only required to know its out-degree at each time. Our main result is a convergence rate of O((ln t)/t) for strongly convex functions with Lipschitz gradients, even if only stochastic gradient samples are available; this is asymptotically faster than the O((ln t)/√t) rate previously known for (general) convex functions.
Full work available at URL: https://arxiv.org/abs/1406.2075
Convex programming (90C25) · Programming involving graphs or networks (90C35) · Distributed algorithms (68W15)
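The iteration described in the abstract can be illustrated with a small simulation. The sketch below, which is not from the paper itself, runs a push-sum-based (sub)gradient step on a toy problem (each node i holds a quadratic f_i(x) = (x − a_i)²/2, so the global optimum is the mean of the a_i) over a time-varying directed ring; as in the paper, each node only needs to know its own out-degree at each time. The function name, graph sequence, and step-size constant are illustrative choices.

```python
# Illustrative sketch of a subgradient-push iteration on scalar quadratics.
# The toy problem, graph sequence, and step size are assumptions, not from the paper.
import numpy as np

def subgradient_push(a, edges_seq, steps=2000):
    """Minimize sum_i (x - a[i])^2 / 2 over a directed graph sequence.

    a         : local targets, one scalar per node
    edges_seq : function t -> list of directed edges (i, j), i sends to j
    """
    n = len(a)
    x = np.zeros(n)          # push-sum numerator state
    y = np.ones(n)           # push-sum weight (denominator) state
    for t in range(1, steps + 1):
        edges = edges_seq(t)
        # out-degree counts a self-loop: every node keeps one share for itself
        out_deg = np.ones(n)
        for i, _ in edges:
            out_deg[i] += 1
        w = x / out_deg      # each node's share per outgoing link
        v = y / out_deg
        x_new = w.copy()     # self-loop contribution
        y_new = v.copy()
        for i, j in edges:   # push shares along the directed edges
            x_new[j] += w[i]
            y_new[j] += v[i]
        z = x_new / y_new    # de-biased local estimate of the optimum
        alpha = 1.0 / t      # O(1/t) step size, appropriate for strong convexity
        grad = z - a         # gradient of each local quadratic at z
        x = x_new - alpha * grad
        y = y_new
    return x / y

# Example: 3 nodes on an alternating (time-varying) directed ring.
a = np.array([1.0, 2.0, 6.0])
ring1 = [(0, 1), (1, 2), (2, 0)]
ring2 = [(1, 0), (2, 1), (0, 2)]
z = subgradient_push(a, lambda t: ring1 if t % 2 else ring2)
# each entry of z approaches mean(a) = 3.0
```

The division by the push-sum weights y removes the bias that column-stochastic (out-degree-based) mixing would otherwise introduce on an unbalanced directed graph, which is the key mechanism that lets each node operate knowing only its own out-degree.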
Cited In (35)
- An adaptive online learning algorithm for distributed convex optimization with coupled constraints over unbalanced directed graphs
- Multi-agent based optimal equilibrium selection with resilience constraints for traffic flow
- Distributed constrained stochastic subgradient algorithms based on random projection and asynchronous broadcast over networks
- An improved distributed gradient-push algorithm for bandwidth resource allocation over wireless local area network
- An accelerated distributed online gradient push-sum algorithm on time-varying directed networks
- Snake: A Stochastic Proximal Gradient Algorithm for Regularized Problems Over Large Graphs
- Distributed discrete-time convex optimization with nonidentical local constraints over time-varying unbalanced directed graphs
- Gradient-free algorithms for distributed online convex optimization
- A differentially private distributed optimization method for constrained optimization
- Decentralized consensus algorithm with delayed and stochastic gradients
- Distributed heterogeneous multi-agent optimization with stochastic sub-gradient
- Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions
- Distributed optimization for multi-agent system over unbalanced graphs with linear convergence rate
- Distributed stochastic gradient tracking methods
- Distributed stochastic optimization algorithm with non-consistent constraints in time-varying unbalanced networks
- On the linear convergence of two decentralized algorithms
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Multi-agent flocking control with complex obstacles and adaptive distributed convex optimization
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- A stochastic averaging gradient algorithm with multi‐step communication for distributed optimization
- Privacy-preserving dual stochastic push-sum algorithm for distributed constrained optimization
- On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints
- Cooperative convex optimization with subgradient delays using push-sum distributed dual averaging
- Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizes
- Convergence rate analysis of distributed optimization with projected subgradient algorithm
- Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs
- Regularized dual gradient distributed method for constrained convex optimization over unbalanced directed graphs
- Optimal convergence rates for convex distributed optimization in networks
- A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks
- Confidence region for distributed stochastic optimization problem via stochastic gradient tracking method
- Distributed constrained optimization for multi-agent networks with communication delays under time-varying topologies
- Optimal distributed stochastic mirror descent for strongly convex optimization
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
- A distributed ADMM-like method for resource sharing over time-varying networks