An Optimal Algorithm for Decentralized Finite-Sum Optimization
Publication: 5162661
DOI: 10.1137/20M134842X
MaRDI QID: Q5162661
Hadrien Hendrikx, Francis Bach, Laurent Massoulié
Publication date: 5 November 2021
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2005.10675
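The record carries no abstract, so as brief orientation: the problem class named in the title is the standard decentralized finite-sum setting. The formulation below is a sketch reconstructed from the paper's arXiv abstract; the notation (\(n\) nodes, \(m\) local components \(f_{i,j}\)) is chosen here for illustration rather than quoted from this record. Each node \(i\) of a communication graph holds a local finite sum, and the nodes cooperate to solve

\[
\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\qquad
f_i(x) = \sum_{j=1}^{m} f_{i,j}(x),
\]

where node \(i\) can evaluate only the gradients of its own components \(f_{i,j}\) and exchange messages with its neighbors in the graph.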
Related Items
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- Stochastic saddle-point optimization for the Wasserstein barycenter problem
- Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
- Decentralized personalized federated learning: lower bounds and optimal algorithm for all personalization modes
- Recent theoretical advances in decentralized distributed convex optimization
Cites Work
- Minimizing finite sums with the stochastic average gradient
- \(\lambda_1\), isoperimetric inequalities for graphs, and superconcentrators
- Optimal scaling of a gradient method for distributed resource allocation
- Introductory lectures on convex optimization. A basic course.
- An optimal randomized incremental gradient method
- Distributed stochastic gradient tracking methods
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Asynchronous parallel algorithms for nonconvex optimization
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- Efficiency of the Accelerated Coordinate Descent Method on Structured Optimization Problems
- A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Revisiting EXTRA for Smooth Distributed Optimization
- Accelerated, Parallel, and Proximal Coordinate Descent
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- Linear Time Average Consensus and Distributed Optimization on Fixed Graphs
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Katyusha: the first direct acceleration of stochastic gradient methods
- Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A dual approach for optimal algorithms in distributed optimization over networks
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization