Improving coordination in small-scale multi-agent deep reinforcement learning through memory-driven communication
From MaRDI portal
Recommendations
- Learning multi-agent coordination through connectivity-driven communication
- Monotonic value function factorisation for deep multi-agent reinforcement learning
- Multi-agent reinforcement learning using ordinal action selection and approximate policy iteration
- A leader-following paradigm based deep reinforcement learning method for multi-agent cooperation games
Cites work
- Consensus and Cooperation in Networked Multi-Agent Systems
- Consensus in multi-agent systems with communication constraints
- Distributed coordination architecture for multi-robot formation control
- Elevator group control using multiple reinforcement learning agents
- Forward induction in coordination games
- Network Topology and Communication Data Rate for Consensusability of Discrete-Time Multi-Agent Systems
- Optimal and approximate Q-value functions for decentralized POMDPS
- Synchronization in networks of identical linear systems
Cited in (2)