Towards accelerated rates for distributed optimization over time-varying networks
Publication: 6349702
DOI: 10.1007/978-3-030-91059-4_19
zbMath: 1527.90162
arXiv: 2009.11069
OpenAlex: W3211480018
MaRDI QID: Q6349702
A. V. Gasnikov, Dmitry P. Kovalev, Egor Shulgin, Alexander Rogozin
Publication date: 23 September 2020
Full work available at URL: https://doi.org/10.1007/978-3-030-91059-4_19
Related Items
- Min-max optimization over slowly time-varying graphs
- Decentralized saddle-point problems with different constants of strong convexity and strong concavity
- Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds
- Decentralized optimization with affine constraints over time-varying networks
- Recent theoretical advances in decentralized distributed convex optimization
Cites Work
- First-order methods of smooth convex optimization with inexact oracle
- Accelerated and unaccelerated stochastic gradient descent in model generality
- Fast Distributed Gradient Methods
- Revisiting EXTRA for Smooth Distributed Optimization
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Distributed Subgradient Methods for Multi-Agent Optimization
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization