Mean-field Markov decision processes with common noise and open-loop controls

From MaRDI portal
Publication:2135276

DOI: 10.1214/21-AAP1713
zbMATH Open: 1491.90179
arXiv: 1912.07883
MaRDI QID: Q2135276


Authors: Médéric Motte, Huyên Pham


Publication date: 6 May 2022

Published in: The Annals of Applied Probability

Abstract: We develop an exhaustive study of Markov decision processes (MDPs) under mean-field interaction in both states and actions, in the presence of common noise, and when optimization is performed over open-loop controls on an infinite horizon. Such a model, called CMKV-MDP for conditional McKean-Vlasov MDP, arises, and is obtained here rigorously with a rate of convergence, as the asymptotic problem of N cooperative agents controlled by a social planner/influencer who observes the environment noises but not necessarily the individual states of the agents. We highlight the crucial role of relaxed controls and of the randomization hypothesis for this class of models with respect to classical MDP theory. We prove the correspondence between the CMKV-MDP and a general lifted MDP on the space of probability measures, and establish the dynamic programming Bellman fixed-point equation satisfied by the value function, as well as the existence of ε-optimal randomized feedback controls. The proofs involve an original measurable optimal coupling for the Wasserstein distance. This provides a procedure for learning strategies in a large population of interacting collaborative agents.

MSC Classification: 90C40, 49L20
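The abstract's central idea of lifting the mean-field control problem to an MDP on the space of probability measures, whose value function solves a Bellman fixed-point equation, can be illustrated with a toy sketch. Assuming a population of agents with two individual states, parameterized by the mass p on state 1 and discretized on a grid of the 1-simplex, value iteration converges to the Bellman fixed point. The transition and reward functions below are invented for illustration and are not taken from the paper:

```python
import numpy as np

GAMMA = 0.9
GRID = np.linspace(0.0, 1.0, 101)  # discretized 1-simplex: mass on state 1

def transition(p, a):
    """Mean-field transition: the next mass on state 1 depends on the
    current distribution p itself (McKean-Vlasov-type interaction).
    Hypothetical dynamics chosen only for demonstration."""
    drift = 0.3 * a - 0.2 * p  # action pushes mass up, crowding pulls it down
    return np.clip(p + drift, 0.0, 1.0)

def reward(p, a):
    """Hypothetical social-planner reward: keep the population balanced
    near p = 0.5, with a small cost for using action a = 1."""
    return -(p - 0.5) ** 2 - 0.05 * a

def bellman(V):
    """One application of the Bellman operator of the lifted MDP,
    evaluated on the grid (next distribution projected to nearest node)."""
    Vnew = np.empty_like(V)
    for i, p in enumerate(GRID):
        values = []
        for a in (0, 1):
            p_next = transition(p, a)
            j = int(round(p_next * (len(GRID) - 1)))  # grid projection
            values.append(reward(p, a) + GAMMA * V[j])
        Vnew[i] = max(values)
    return Vnew

# Iterate the (discounted, hence contractive) Bellman operator to its
# fixed point: the value function of the lifted MDP on the grid.
V = np.zeros_like(GRID)
for _ in range(1000):
    V_next = bellman(V)
    if np.max(np.abs(V_next - V)) < 1e-10:
        break
    V = V_next
```

In the paper's setting the state space of the lifted MDP is the full (conditional) law of the representative agent, and the common noise makes that law itself random; the grid over p here stands in for that measure-valued state in the simplest possible way.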


Full work available at URL: https://arxiv.org/abs/1912.07883




Cited In (14)



