Communication-efficient algorithms for decentralized and stochastic optimization
DOI: 10.1007/s10107-018-1355-4 · zbMATH Open: 1437.90125 · arXiv: 1701.03961 · OpenAlex: W2963855576 · Wikidata: Q128829704 · Scholia: Q128829704 · MaRDI QID: Q2297648 · FDO: Q2297648
Authors: Guanghui Lan, Soomin Lee, Yi Zhou
Publication date: 20 February 2020
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1701.03961
Recommendations
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- A randomized incremental primal-dual method for decentralized consensus optimization
- Optimal convergence rates for convex distributed optimization in networks
- Decentralized online convex optimization with compressed communications
- On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
Keywords: complexity; stochastic programming; decentralized optimization; primal-dual method; nonsmooth functions; communication efficient; decentralized machine learning
MSC classifications: Convex programming (90C25); Numerical methods based on nonlinear programming (49M37); Stochastic programming (90C15); Decentralized systems (93A14)
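For orientation, the sketch below illustrates the problem class this record addresses: m agents on a communication network cooperatively minimize the sum of their private objectives, each agent sampling stochastic gradients locally and exchanging iterates only with its neighbors through a doubly stochastic mixing matrix. It is a minimal reference implementation of plain decentralized stochastic gradient descent, not the communication-sliding methods developed in the paper; the least-squares data, ring topology, and step-size schedule are illustrative assumptions.

```python
# Minimal sketch (assumptions labeled): decentralized stochastic gradient
# descent for m agents minimizing f(x) = sum_i f_i(x), where f_i is a private
# least-squares loss. This is NOT the paper's communication-sliding method;
# it only illustrates the decentralized stochastic setting.
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical problem data: decentralized least squares -----------------
m, d, n_local = 5, 10, 40                      # agents, dimension, samples/agent
A = [rng.standard_normal((n_local, d)) for _ in range(m)]
x_true = rng.standard_normal(d)
b = [Ai @ x_true + 0.1 * rng.standard_normal(n_local) for Ai in A]

# --- Communication graph: ring topology, doubly stochastic mixing matrix ----
W = np.zeros((m, m))
for i in range(m):
    for j in ((i - 1) % m, (i + 1) % m):       # two ring neighbors
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0                         # symmetric => doubly stochastic

def stochastic_grad(i, x):
    """Unbiased stochastic gradient of agent i's loss from one sampled row."""
    k = rng.integers(n_local)
    a, bi = A[i][k], b[i][k]
    return (a @ x - bi) * a

# --- Decentralized stochastic gradient iterations ---------------------------
X = np.zeros((m, d))                            # row i = agent i's iterate
for t in range(1, 2001):
    G = np.stack([stochastic_grad(i, X[i]) for i in range(m)])
    X = W @ X - (1.0 / np.sqrt(t)) * G          # gossip average, then local step

x_bar = X.mean(axis=0)
print("consensus error:", np.linalg.norm(X - x_bar))
print("distance to x_true:", np.linalg.norm(x_bar - x_true))
```

Each multiplication by W corresponds to one round of neighbor-to-neighbor communication; reducing the number of such rounds relative to (sub)gradient evaluations is the sense in which the methods in this publication are communication-efficient.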
Cites Work
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- Title not available
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Title not available
- Coordination of groups of mobile autonomous agents using nearest neighbor rules
- Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- An optimal method for stochastic composite optimization
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Distributed Subgradient Methods for Multi-Agent Optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Fast Distributed Gradient Methods
- Title not available
- On Distributed Convex Optimization Under Inequality and Equality Constraints
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed stochastic subgradient projection algorithms for convex optimization
- Validation analysis of mirror descent stochastic approximation method
- Title not available
- Incremental proximal methods for large scale convex optimization
- Gradient sliding for composite optimization
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- Asynchronous Broadcast-Based Convex Optimization Over a Network
- Distributed Optimization Over Time-Varying Directed Graphs
- Distributed asynchronous incremental subgradient methods
- Incremental stochastic subgradient algorithms for convex optimization
- An Accelerated Linearized Alternating Direction Method of Multipliers
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- An optimal randomized incremental gradient method
- Convergence Rate of Distributed ADMM Over Networks
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization
- Stochastic Proximal Gradient Consensus Over Random Networks
- DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
Cited In (32)
- Decentralized online convex optimization with compressed communications
- Distributed Decoding From Heterogeneous 1-Bit Compressive Measurements
- Decentralized multi-agent optimization based on a penalty method
- Exact penalties for decomposable convex optimization problems
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
- A review of decentralized optimization focused on information flows of decomposition algorithms
- Title not available
- Distributed stochastic gradient tracking methods
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- On the Divergence of Decentralized Nonconvex Optimization
- Distributed multi-agent optimisation via coordination with second-order nearest neighbours
- Balancing Communication and Computation in Distributed Optimization
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Distributed Sparse Composite Quantile Regression in Ultrahigh Dimensions
- Recent theoretical advances in decentralized distributed convex optimization
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Incremental without replacement sampling in nonconvex optimization
- A dual approach for optimal algorithms in distributed optimization over networks
- Distributed communication-sliding mirror-descent algorithm for nonsmooth resource allocation problem
- Adaptive consensus: a network pruning approach for decentralized optimization
- The communication complexity for decentralized evaluation of functions
- A randomized incremental primal-dual method for decentralized consensus optimization
- Revisiting EXTRA for Smooth Distributed Optimization
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Optimal Methods for Convex Risk-Averse Distributed Optimization
- Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions
- Distributed aggregative optimization with quantized communication
- On the communication complexity of Lipschitzian optimization for the coordinated model of computation
- Title not available
- More communication-efficient distributed sparse learning
- Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity