Fully asynchronous policy evaluation in distributed reinforcement learning over networks
From MaRDI portal
Publication:2063869
DOI: 10.1016/j.automatica.2021.110092
zbMath: 1480.93027
arXiv: 2003.00433
OpenAlex: W4200113240
MaRDI QID: Q2063869
Tamer Başar, Jiaqi Zhang, Xingyu Sha, Kaiqing Zhang, Keyou You
Publication date: 3 January 2022
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2003.00433
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Distributed algorithms (68W15)
- Multi-agent systems (93A16)
- Networked control (93B70)
Cites Work
- Minimizing finite sums with the stochastic average gradient
- Convergence rate for consensus with delays
- Multi-agent reinforcement learning: a selective overview of theories and algorithms
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- Product of Random Stochastic Matrices and Distributed Averaging
- Distributed Convex Optimization with Inequality Constraints over Time-Varying Unbalanced Digraphs
- Convergence of Asynchronous Distributed Gradient Methods Over Stochastic Networks
- QD-Learning: A Collaborative Distributed Strategy for Multi-Agent Reinforcement Learning Through Consensus + Innovations
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Multiagent Fully Decentralized Value Function Learning With Linear Convergence Rates
- Variance-Reduced Decentralized Stochastic Optimization With Accelerated Convergence
- AsySPA: An Exact Asynchronous Algorithm for Convex Optimization Over Digraphs
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization
- Push–Pull Gradient Methods for Distributed Optimization in Networks
- Asynchronous Gradient Push