Improving coordination in small-scale multi-agent deep reinforcement learning through memory-driven communication
From MaRDI portal
Publication: 2217433
DOI: 10.1007/s10994-019-05864-5
OpenAlex: W3100019413
Wikidata: Q126308537
Scholia: Q126308537
MaRDI QID: Q2217433
FDO: Q2217433
Authors: Emanuele Pesce, Giovanni Montana
Publication date: 29 December 2020
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1901.03887
Recommendations
- Learning multi-agent coordination through connectivity-driven communication
- Monotonic value function factorisation for deep multi-agent reinforcement learning
- Multi-agent reinforcement learning using ordinal action selection and approximate policy iteration
- scientific article; zbMATH DE number 1977927
- A leader-following paradigm based deep reinforcement learning method for multi-agent cooperation games
Cites Work
- Consensus in multi-agent systems with communication constraints
- Optimal and approximate Q-value functions for decentralized POMDPS
- Consensus and Cooperation in Networked Multi-Agent Systems
- Synchronization in networks of identical linear systems
- Network Topology and Communication Data Rate for Consensusability of Discrete-Time Multi-Agent Systems
- Distributed coordination architecture for multi-robot formation control
- Elevator group control using multiple reinforcement learning agents
- Forward induction in coordination games
Cited In (2)