Policy Iteration Reinforcement Learning Method for Continuous-time Mean-Field Linear-Quadratic Optimal Problem

Publication: 6434801

arXiv: 2305.00424
MaRDI QID: Q6434801
FDO: Q6434801


Authors: Na Li, Xun Li, Zuo Quan Xu


Publication date: 30 April 2023

Abstract: This paper employs a policy iteration reinforcement learning (RL) method to investigate continuous-time mean-field linear-quadratic problems over an infinite horizon. The drift and diffusion terms of the dynamics involve the state as well as the control. The stability and convergence of the RL algorithm are examined via a Lyapunov recursion. Instead of solving a pair of coupled Riccati equations, the RL technique uses an auxiliary function and the cost functional as the objective functions, and updates the policy to compute the optimal control from state trajectories. A numerical example illustrates the established theoretical results.
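
The policy-iteration idea underlying the paper can be illustrated, in a much simpler setting, by Kleinman-type policy iteration for a deterministic continuous-time LQR problem: each iteration evaluates the current feedback gain by solving a Lyapunov equation and then improves the gain, and the value matrix converges to the solution of the algebraic Riccati equation. The sketch below is only such an illustration, not the paper's mean-field algorithm; the system matrices, cost weights, and initial stabilizing gain are assumed example values, and the mean-field terms and the diffusion driven by state and control are omitted.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Assumed example data (not from the paper): dynamics dx/dt = A x + B u,
# quadratic cost with weights Q, R, and linear feedback policy u = -K x.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K = np.array([[1.0, 1.0]])   # initial stabilizing gain (A - B K is Hurwitz here)

for _ in range(50):
    A_cl = A - B @ K
    # Policy evaluation: solve (A - B K)' P + P (A - B K) + Q + K' R K = 0
    P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
    # Policy improvement: K_new = R^{-1} B' P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

# Cross-check against the direct solution of the algebraic Riccati equation.
P_are = solve_continuous_are(A, B, Q, R)
print("policy iteration P:\n", P)
print("ARE solution     P:\n", P_are)

For the mean-field problem treated in the paper, the policy evaluation step would instead involve the pair of coupled Riccati equations mentioned in the abstract, with the data estimated from state trajectories rather than from known model matrices.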
