The master equation in mean field theory (Q2344557)

From MaRDI portal

Language: English
Label: The master equation in mean field theory
Description: scientific article

    Statements

    The master equation in mean field theory (English)
    15 May 2015
    The authors derive the \textit{master equation} for (1) mean field type control and (2) mean field games. The state \(x\) of the system satisfies a stochastic differential equation whose right-hand side depends on \(x\), a control/feedback \(v\), and \(m\), the probability density of \(x\). The variable \(x\) models a \textit{representative agent} influenced by a large population of similar agents in mean field approximation; hence only their probability density \(m\) appears. Let the random variable \(x\) be the solution of the SDE driven by \(v\), and let \(m_v\) be the probability density of \(x\). In a mean field type control problem, one minimizes a functional \(J\) over the space of feedbacks \(v\), where \(J\) depends on the probability density \(m=m_v\). Essentially, one searches for \((\hat{v},m_{\hat{v}})\) such that \(J(\hat{v},m_{\hat{v}})\leqslant J(v,m_v)\) for all \(v\). In a mean field game, by contrast, one minimizes \(J\) over the space of feedbacks but fixes \(m\) first, and additionally imposes that \(m\) be the probability density of the state \(\hat{x}\) driven by the \textit{optimal} feedback \(\hat{v}\); hence \(m=m_{\hat{v}}\). Essentially, one searches for \((\hat{v},m_{\hat{v}})\) such that \(J(\hat{v},m_{\hat{v}})\leqslant J(v,m_{\hat{v}})\) for all \(v\).
    In Section 2, the authors derive (mainly formally) the master equation for mean field type control and for mean field games. In Sections 3 and 4, they incorporate an extra stochastic contribution (with parameter \(\beta\)) in such a way that stochasticity remains present in the resulting PDEs. For the specific choice of \textit{linear quadratic problems}, they obtain in Section 5 semi-explicit solutions to the master equation, both for mean field type control and for mean field games. In the case \(\beta=0\), they recover the results of a preprint by \textit{A. Bensoussan}, \textit{J. Sung}, \textit{P. Yam} and \textit{S. P. Yung} [``Linear-quadratic mean field games'', \url{arXiv:1404.5741}].
    Throughout most of the paper, \(m\) is considered as an element of \(L^2(\mathbb{R}^n)\), or rather \(L^1\cap L^2\), and accordingly Gâteaux functional derivatives are used. In Section 6, \(m\) is taken to be a sum of \(N\) Dirac measures, with derivatives taken with respect to the Dirac positions, hence in \(\mathbb{R}^{N}\). A particle approximation to the master equation is constructed, which is especially relevant in view of the interpretation of mean field games as the limit \(N\to\infty\) of Nash equilibrium games with a finite number \(N\) of players. Finally, a particular application (\textit{systemic risk}) is considered in Section 7, where the authors recover the results of \textit{R. Carmona} et al. [Commun. Math. Sci. 13, No. 4, 911--933 (2015; Zbl 1337.91031)].
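The distinction between the two optimality notions can be made concrete in a toy static quadratic model. This is a hypothetical illustration, not an example from the paper: each of \(N\) agents with heterogeneous target \(a_i\) (names `a`, `kappa` and the cost \((x_i-a_i)^2/2+\kappa\, x_i\, \bar m\) are assumptions chosen for simplicity). In the game, each agent best-responds to a fixed population mean and one iterates to the fixed point \(m=m_{\hat v}\); in mean field type control, the planner accounts for the effect of the controls on the mean itself, and the first-order conditions yield a different mean.

```python
# Toy illustration (hypothetical model, not from the paper): mean field game
# (MFG) fixed point vs. mean field type control (MFC) optimum in a static
# quadratic model with mean coupling. Agent i pays (x_i - a_i)^2/2 + kappa*x_i*m,
# where m is the population mean of the x_i.
import numpy as np

rng = np.random.default_rng(0)
N, kappa = 1000, 0.5
a = rng.normal(1.0, 0.3, N)      # heterogeneous targets a_i (assumed data)
a_bar = a.mean()

def best_response(m):
    """Minimize (x - a_i)^2/2 + kappa*x*m in x, for a *fixed* mean m."""
    return a - kappa * m

# MFG: fix m, best-respond, then impose consistency m = mean of the responses;
# the iteration m_{k+1} = a_bar - kappa*m_k contracts to a_bar / (1 + kappa).
m = 0.0
for _ in range(100):
    m = best_response(m).mean()
mfg_mean = m

# MFC: the planner minimizes the *average* cost, which equals
# mean((x_i - a_i)^2)/2 + kappa*m^2, so the control's effect on m is
# internalized; the first-order conditions give x_i = a_i - 2*kappa*m and
# hence m = a_bar / (1 + 2*kappa), which differs from the MFG fixed point.
mfc_mean = a_bar / (1 + 2 * kappa)

print("MFG equilibrium mean:", mfg_mean)   # close to a_bar / (1 + kappa)
print("MFC optimal mean:    ", mfc_mean)   # a_bar / (1 + 2*kappa)
```

The gap between the two means is exactly the internalization of \(J\)'s dependence on \(m\): the planner optimizes over \((v, m_v)\) jointly, while each game player takes \(m_{\hat v}\) as given.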
    master equation
    mean field type control
    mean field games
