Graphon mean-field control for cooperative multi-agent reinforcement learning

Publication: 6136535

DOI: 10.1016/J.JFRANKLIN.2023.09.002
zbMATH Open: 1530.93012
arXiv: 2209.04808
OpenAlex: W4386511453
MaRDI QID: Q6136535
FDO: Q6136535

Xiaoli Wei, Yuanquan Hu, Junji Yan, Hengxi Zhang

Publication date: 17 January 2024

Published in: Journal of the Franklin Institute

Abstract: The marriage between mean-field theory and reinforcement learning has shown great capacity to solve large-scale control problems with homogeneous agents. To break the homogeneity restriction of mean-field theory, recent work has introduced graphon theory into the mean-field paradigm. In this paper, we propose a graphon mean-field control (GMFC) framework to approximate cooperative multi-agent reinforcement learning (MARL) with nonuniform interactions, and we show that the approximation error is of order $\mathcal{O}(1/\sqrt{N})$, with $N$ the number of agents. By discretizing the graphon index of GMFC, we further introduce a smaller class of GMFC called block GMFC, which is shown to approximate cooperative MARL well. Our empirical studies on several examples demonstrate that our GMFC approach is comparable to state-of-the-art MARL algorithms while enjoying better scalability.
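The abstract's two central constructions, the graphon-weighted mean field and its block discretization, can be made concrete with a small numerical sketch. Read informally, the approximation claim says that if $V^N$ is the optimal value of the $N$-agent cooperative problem and $V$ the value of the limiting GMFC problem, then $|V^N - V| \le C/\sqrt{N}$ for some constant $C$ independent of $N$ (notation assumed here for exposition, not quoted from the paper). The Python snippet below is likewise illustrative only: the exponential graphon kernel, the midpoint block discretization, and all names are assumptions, chosen to show how a population of $N$ agents aggregates its neighbors' states through a graphon and how a step-function (block) graphon coarsens that aggregation.

```python
import numpy as np

def graphon(x, y):
    # Hypothetical graphon W(x, y) on [0,1]^2: interaction strength between
    # agents with population indices x and y (exponential kernel chosen
    # purely for illustration).
    return np.exp(-np.abs(x - y))

def block_graphon(x, y, K=4):
    # Step-function ("block") approximation: partition the index set [0,1]
    # into K uniform intervals and evaluate W at block midpoints, a coarse
    # stand-in for averaging W over each block pair.
    bx = (np.minimum(np.floor(x * K), K - 1) + 0.5) / K
    by = (np.minimum(np.floor(y * K), K - 1) + 0.5) / K
    return graphon(bx, by)

def graphon_mean_field(states, W):
    # states: shape-(N,) array of scalar agent states. Agent i sits at
    # index x_i = i/N; its neighborhood mean field is the W-weighted
    # average of all states, the finite-N analogue of integrating
    # W(x, y) against the population state distribution.
    N = len(states)
    xs = np.arange(N) / N
    weights = W(xs[:, None], xs[None, :])  # (N, N) interaction matrix
    return (weights @ states) / N

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 1000
    states = rng.normal(size=N)
    mf_full = graphon_mean_field(states, graphon)
    mf_block = graphon_mean_field(states, lambda x, y: block_graphon(x, y, K=8))
    print("max gap, full vs. block mean field:", np.abs(mf_full - mf_block).max())
```

For a Lipschitz graphon like this one, the printed gap shrinks as the number of blocks $K$ grows, which matches the intuition behind approximating GMFC by the smaller block-GMFC class.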


Full work available at URL: https://arxiv.org/abs/2209.04808










