Optimal gradient tracking for decentralized optimization
Publication:6608029
Recommendations
- Balancing communication and computation in gradient tracking algorithms for decentralized optimization
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
- Communication-efficient algorithms for decentralized and stochastic optimization
- Provably Accelerated Decentralized Gradient Method Over Unbalanced Directed Graphs
- Distributed stochastic gradient tracking methods
Cites work
- A Decentralized Proximal-Gradient Method With Network Independent Step-Sizes and Separated Convergence Rates
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- ADD-OPT: Accelerated Distributed Directed Optimization
- Accelerated Distributed Nesterov Gradient Descent
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization
- Chebyshev Acceleration Techniques for Solving Nonsymmetric Eigenvalue Problems
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- Convex Optimization: Algorithms and Complexity
- DLM: Decentralized Linearized Alternating Direction Method of Multipliers
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Decentralized Optimization Over Time-Varying Directed Graphs With Row and Column-Stochastic Matrices
- Decentralized and Parallel Primal and Dual Accelerated Methods for Stochastic Convex Programming Problems
- Distributed Algorithms for Composite Optimization: Unified Framework and Convergence Analysis
- Distributed Learning Algorithms for Spectrum Sharing in Spatial Random Access Wireless Networks
- Distributed Recursive Least-Squares: Stability and Performance Analysis
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed asynchronous computation of fixed points
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Exact Diffusion for Distributed Optimization and Learning—Part I: Algorithm Development
- Fast Convergence Rates for Distributed Non-Bayesian Learning
- Fast Distributed Gradient Methods
- Harnessing Smoothness to Accelerate Distributed Optimization
- Katyusha: The First Direct Acceleration of Stochastic Gradient Methods
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- Multi-Fidelity Optimization via Surrogate Modelling
- On Projected Stochastic Gradient Descent Algorithm with Weighted Averaging for Least Squares Regression
- On the Convergence of Decentralized Gradient Descent
- Optimal Distributed Convex Optimization on Slowly Time-Varying Graphs
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- Revisiting EXTRA for Smooth Distributed Optimization
Cited in: 1 document
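For context, "gradient tracking" in the title refers to a family of decentralized first-order methods in which each agent, besides its own iterate, maintains an auxiliary variable that tracks the network-average gradient. The sketch below illustrates the classical gradient-tracking iteration (DIGing-style) on an assumed decentralized least-squares setup over a ring network; it is a minimal illustration of the generic technique, not the accelerated "optimal" method of this publication.

```python
import numpy as np

# Minimal sketch of the classical gradient-tracking iteration, under an
# ASSUMED setup: n agents minimize (1/n) * sum_i f_i(x), where
# f_i(x) = 0.5 * ||A_i x - b_i||^2, over a ring network with a
# doubly stochastic mixing matrix W. Not the method of this publication.

rng = np.random.default_rng(0)
n, d = 5, 3                                   # number of agents, dimension
A = rng.standard_normal((n, d, d))
b = rng.standard_normal((n, d))

def local_grad(i, x):
    """Gradient of agent i's local cost at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.02                                  # step size (assumed small enough)
X = np.zeros((n, d))                          # row i = agent i's iterate
Y = np.array([local_grad(i, X[i]) for i in range(n)])  # gradient trackers

for _ in range(2000):
    grads_old = np.array([local_grad(i, X[i]) for i in range(n)])
    X = W @ X - alpha * Y                     # mix with neighbors, step along tracker
    grads_new = np.array([local_grad(i, X[i]) for i in range(n)])
    Y = W @ Y + grads_new - grads_old         # Y tracks the average gradient

print("consensus estimate of the minimizer:", X.mean(axis=0))
```

Under suitable step sizes, every row of X reaches consensus on the minimizer of the average cost; the accelerated methods surveyed above refine this basic scheme (e.g. via Nesterov momentum or Chebyshev-accelerated mixing) to attain optimal complexity bounds.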