Graphon mean-field control for cooperative multi-agent reinforcement learning
DOI: 10.1016/j.jfranklin.2023.09.002 · zbMATH Open: 1530.93012 · arXiv: 2209.04808 · OpenAlex: W4386511453 · MaRDI QID: Q6136535 · FDO: Q6136535
Authors: Yuanquan Hu, Xiaoli Wei, Junji Yan, Hengxi Zhang
Publication date: 17 January 2024
Published in: Journal of the Franklin Institute
Full work available at URL: https://arxiv.org/abs/2209.04808
Recommendations
- Mean-field controls with Q-learning for cooperative MARL: convergence and complexity analysis
- Model-free mean-field reinforcement learning: mean-field MDP and mean-field Q-learning
- Graphon mean field games and their equations
- Unified reinforcement Q-learning for mean field game and control problems
- A General Framework for Learning Mean-Field Games
MSC classification: Learning and adaptive systems in artificial intelligence (68T05); Applications of graph theory (05C90); Multi-agent systems (93A16)
Cites Work
- Large networks and graph limits
- High-dimensional statistics. A non-asymptotic viewpoint
- Mean field games
- Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle
- Probabilistic analysis of mean-field games
- Distributed learning and cooperative control for multi-agent systems
- Mean field games: a toy model on an Erdős–Rényi graph
- Stochastic graphon games. I: The static case
- Mean field game of controls and an application to trade crowding
- Graphon Control of Large-Scale Networks of Linear Systems
- Finite mean field games: fictitious play and convergence to a first order continuous mean field game
- Mean-field controls with Q-learning for cooperative MARL: convergence and complexity analysis
- Convergence of weighted empirical measures
Cited In (1)