Multiagent Fully Decentralized Value Function Learning With Linear Convergence Rates
Publication: 4990263
DOI: 10.1109/TAC.2020.2995814
OpenAlex: W3027121709
MaRDI QID: Q4990263
Authors: Lucas Cassano, Kun Yuan, Ali H. Sayed
Publication date: 28 May 2021
Published in: IEEE Transactions on Automatic Control
Full work available at URL: https://arxiv.org/abs/1810.07792
Recommendations
- Partially decentralized reinforcement learning in finite, multi-agent Markov decision processes
- Distributed consensus-based multi-agent temporal-difference learning
- Finite-Time Convergence Rates of Decentralized Stochastic Approximation With Applications in Multi-Agent and Multi-Task Learning
- Byzantine-Resilient Decentralized Policy Evaluation With Linear Function Approximation
- Provably efficient reinforcement learning in decentralized general-sum Markov games
- Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization
- Distributed multi-agent temporal-difference learning with full neighbor information
Cited In (5)
- Distributed consensus-based multi-agent temporal-difference learning
- Distributed Reinforcement Learning for Decentralized Linear Quadratic Control: A Derivative-Free Policy Optimization Approach
- Byzantine-Resilient Decentralized Policy Evaluation With Linear Function Approximation
- Fully asynchronous policy evaluation in distributed reinforcement learning over networks
- Decentralized Q-Learning for Stochastic Teams and Games