Optimal gradient tracking for decentralized optimization
Publication: 6608029
DOI: 10.1007/s10107-023-01997-7
MaRDI QID: Q6608029
FDO: Q6608029
Authors: Lei Shi, Shi Pu, Ming Yan
Publication date: 19 September 2024
Published in: Mathematical Programming. Series A. Series B
Cites Work
- ADD-OPT: Accelerated Distributed Directed Optimization
- On the convergence of decentralized gradient descent
- Distributed Subgradient Methods for Multi-Agent Optimization
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Fast Distributed Gradient Methods
- Multi-fidelity optimization via surrogate modelling
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Distributed asynchronous computation of fixed points
- Chebyshev Acceleration Techniques for Solving Nonsymmetric Eigenvalue Problems
- Convex optimization: algorithms and complexity
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Distributed Recursive Least-Squares: Stability and Performance Analysis
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- DLM: Decentralized Linearized Alternating Direction Method of Multipliers
- Harnessing Smoothness to Accelerate Distributed Optimization
- Optimal Distributed Convex Optimization on Slowly Time-Varying Graphs
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- On Projected Stochastic Gradient Descent Algorithm with Weighted Averaging for Least Squares Regression
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- Distributed Learning Algorithms for Spectrum Sharing in Spatial Random Access Wireless Networks
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- Exact Diffusion for Distributed Optimization and Learning—Part I: Algorithm Development
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Revisiting EXTRA for Smooth Distributed Optimization
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization
- Accelerated Distributed Nesterov Gradient Descent
- Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis
- Decentralized Optimization Over Time-Varying Directed Graphs With Row and Column-Stochastic Matrices
- Katyusha: the first direct acceleration of stochastic gradient methods
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters