DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
DOI: 10.1137/21M1450677
OpenAlex: W3202869544
MaRDI QID: Q5095229
Boyue Li, Yuejie Chi, Zhize Li
Publication date: 5 August 2022
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/2110.01165
MSC classification:
- Analysis of algorithms and problem complexity (68Q25)
- Computational aspects of data analysis and big data (68T09)
- Communication complexity, information complexity (68Q11)
Cites Work
- Chebyshev acceleration of iterative refinement
- Discrete-time dynamic average consensus
- Finite-sum smooth optimization with SARAH
- Fast linear iterations for distributed averaging
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed Control of Multiconsensus
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
- Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Balancing Communication and Computation in Distributed Optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- On the Convergence of Nested Decentralized Gradient Methods With Multiple Consensus and Gradient Steps
- A Fast Randomized Incremental Gradient Method for Decentralized Nonconvex Optimization