An accelerated distributed gradient method with local memory
DOI: 10.1016/j.automatica.2022.110260 · zbMath: 1504.93151 · OpenAlex: W4296608771 · MaRDI QID: Q2097691
Authors: Xiaoxing Ren, Haibin Shao, Dewei Li, Yu-Geng Xi
Publication date: 14 November 2022
Published in: Automatica
Full work available at URL: https://doi.org/10.1016/j.automatica.2022.110260
Cites Work
- Distributed gradient algorithm for constrained optimization with application to load sharing in power systems
- An anticipatory protocol to reach fast consensus in multi-agent systems
- Fast linear iterations for distributed averaging
- Distributed Coordinate Descent Method for Learning with Big Data
- Adding a Single State Memory Optimally Accelerates Symmetric Linear Maps
- Distributed Optimization Over Time-Varying Directed Graphs
- Fast Distributed Gradient Methods
- Linear Convergence in Optimization Over Directed Graphs With Row-Stochastic Matrices
- Optimization and Analysis of Distributed Averaging With Short Node Memory
- Accelerated Distributed Average Consensus via Localized Node State Prediction
- Linear Time Average Consensus and Distributed Optimization on Fixed Graphs
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Harnessing Smoothness to Accelerate Distributed Optimization
- Distributed Subgradient-Based Multiagent Optimization With More General Step Sizes
- Distributed Subgradient Methods for Multi-Agent Optimization
- Accelerated Distributed Nesterov Gradient Descent
- Distributed Heavy-Ball: A Generalization and Acceleration of First-Order Methods With Gradient Tracking
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Decentralized Optimization Over Time-Varying Directed Graphs With Row and Column-Stochastic Matrices
- Push–Pull Gradient Methods for Distributed Optimization in Networks
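
Several of the cited works above (e.g. EXTRA, Harnessing Smoothness, Distributed Heavy-Ball) concern gradient tracking and momentum-based acceleration, the family of methods this publication belongs to. As a rough illustration of that family only, and not the algorithm of the paper recorded here, the sketch below runs distributed heavy-ball with gradient tracking on a toy problem. The agent count, ring topology, Metropolis weights, quadratic local costs, and the values of alpha and beta are all illustrative assumptions.

```python
import numpy as np

# Illustrative only: distributed heavy-ball with gradient tracking, in the
# spirit of the cited "Distributed Heavy-Ball ... With Gradient Tracking";
# NOT the algorithm of the publication recorded above.
# Each agent i holds f_i(x) = 0.5 * a[i] * (x - b[i])**2; together the
# agents minimize sum_i f_i(x), whose minimizer is a weighted mean of b.

n = 5                                    # number of agents (assumed)
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.0, size=n)        # local curvatures (assumed)
b = rng.uniform(-1.0, 1.0, size=n)       # local targets (assumed)
x_star = np.sum(a * b) / np.sum(a)       # closed-form global minimizer

# Ring graph with Metropolis-style weights: symmetric, doubly stochastic.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

def grad(x):
    """Stacked local gradients: agent i only uses a[i], b[i], x[i]."""
    return a * (x - b)

alpha, beta = 0.05, 0.3                  # step size / momentum (heuristic)
x_prev = np.zeros(n)                     # one-step "local memory"
x = np.zeros(n)
y = grad(x)                              # gradient tracker, y_0 = grad(x_0)

for _ in range(1000):
    # Consensus step + tracked-gradient step + heavy-ball memory term.
    x_next = W @ x - alpha * y + beta * (x - x_prev)
    # Tracking update keeps mean(y) equal to the mean current gradient
    # because W is doubly stochastic.
    y = W @ y + grad(x_next) - grad(x)
    x_prev, x = x, x_next

print("max deviation from optimum:", np.abs(x - x_star).max())
```

In this sketch the momentum term beta * (x - x_prev) is the sense in which acceleration uses only local memory: each agent stores its own previous iterate, so beyond the single mixing step per round no extra communication is required.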