Learning Optimal Policies in Potential Mean Field Games: Smoothed Policy Iteration Algorithms


DOI: 10.1137/22M1539861 · arXiv: 2212.04791 · OpenAlex: W4391170322 · Wikidata: Q129504108 · Scholia: Q129504108 · MaRDI QID: Q6148451 · FDO: Q6148451


Authors: Qing Tang, Jiahao Song


Publication date: 7 February 2024

Published in: SIAM Journal on Control and Optimization

Abstract: We introduce two Smoothed Policy Iteration algorithms (SPIs) as rules for learning policies and as methods for computing Nash equilibria in second-order potential Mean Field Games (MFGs). Global convergence is proved when the coupling term in the MFG system satisfies the Lasry-Lions monotonicity condition. Local convergence to a stable solution is proved for systems which may have multiple solutions. The convergence analysis shows close connections between SPIs and the Fictitious Play algorithm, which has been widely studied in the MFG literature. Numerical simulation results based on finite difference schemes are presented to supplement the theoretical analysis.
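To illustrate the general structure of a smoothed, fictitious-play-style fixed-point iteration as described in the abstract, here is a minimal, self-contained Python sketch. It is not the paper's algorithm (which concerns second-order MFGs discretized by finite difference schemes); the toy model, the quadratic congestion-type coupling, the softmax smoothing of the best response, and the 1/k averaging schedule are all assumptions made only for illustration.

# Minimal sketch: smoothed best-response iteration for a toy discrete mean field game.
# All model choices (5 states, 2 actions, congestion cost, softmax smoothing,
# fictitious-play-style policy averaging) are illustrative assumptions.
import numpy as np

n_states, n_actions = 5, 2
rng = np.random.default_rng(0)
# transition kernel: P[s, a] is a distribution over next states
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

def stage_cost(m):
    # congestion-type coupling: crowded states are more expensive (assumed form)
    base = np.linspace(0.0, 1.0, n_states)[:, None] + np.array([0.0, 0.2])[None, :]
    return base + m[:, None]                          # cost[s, a]

def evaluate_q(policy, m, gamma=0.9, iters=200):
    # policy evaluation for the representative agent, population distribution m fixed
    c = stage_cost(m)
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = c + gamma * P @ V                         # Q[s, a]
        V = (policy * Q).sum(axis=1)
    return Q

def propagate(policy, m0, steps=50):
    # push the population distribution forward under the current policy
    m = m0.copy()
    for _ in range(steps):
        flow = m[:, None] * policy                    # mass in state s choosing action a
        m = np.einsum("sa,sat->t", flow, P)
    return m

policy = np.full((n_states, n_actions), 1.0 / n_actions)
m = np.full(n_states, 1.0 / n_states)
for k in range(1, 201):
    Q = evaluate_q(policy, m)
    # softmax ("smoothed") best response, stabilized per row
    br = np.exp(-(Q - Q.min(axis=1, keepdims=True)) / 0.1)
    br /= br.sum(axis=1, keepdims=True)
    policy = (1 - 1.0 / k) * policy + (1.0 / k) * br  # fictitious-play-style averaging
    m = propagate(policy, m)
print("approximate equilibrium distribution:", np.round(m, 3))

The averaging step mirrors the connection to Fictitious Play mentioned in the abstract: instead of jumping to the (smoothed) best response, the policy moves toward it with a vanishing step size, which is what makes convergence arguments of this type possible.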


Full work available at URL: https://arxiv.org/abs/2212.04791



