On the Divergence of Decentralized Nonconvex Optimization
From MaRDI portal
Publication: 5056326
DOI: 10.1137/20M1353149
MaRDI QID: Q5056326
Junyu Zhang, Haoran Sun, Mingyi Hong, Siliang Zeng
Publication date: 8 December 2022
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2006.11662
Cites Work
- Introductory lectures on convex optimization. A basic course.
- On the Convergence of Decentralized Gradient Descent
- Adaptation, Learning, and Optimization over Networks
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Decentralized Frank–Wolfe Algorithm for Convex and Nonconvex Problems
- Diffusion Least-Mean Squares Over Adaptive Networks: Formulation and Performance Analysis
- Diffusion LMS Strategies for Distributed Estimation
- Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- On Nonconvex Decentralized Gradient Descent
- Distributed Subgradient Methods for Multi-Agent Optimization
- Multilevel Composite Stochastic Optimization via Nested Variance Reduction
- Asynchronous Optimization Over Graphs: Linear Convergence Under Error Bound Conditions
- Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Performance of a Distributed Stochastic Approximation Algorithm
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization