General existence of solutions to dynamic programming equations (Q2347696)
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | General existence of solutions to dynamic programming equations | scientific article | |
Statements
General existence of solutions to dynamic programming equations (English)
5 June 2015
Under certain conditions it is possible to construct a deterministic or stochastic discrete game from a partial differential equation on a bounded domain \(\Omega\) whose value functions converge to the solution of the partial differential equation. In the proof of such game-theoretic approximations it is necessary to establish the dynamic programming principle. In Section 1 the authors prove a general theorem for a certain two-player, zero-sum game, also known as a tug-of-war game, which extends the classical Perron method: if the value function \(u^\epsilon\) of the game is given by \(u^\epsilon (x) = \frac{1}{2}\sup_{B_\epsilon(x)} u^\epsilon + \frac{1}{2}\inf_{B_\epsilon(x)} u^\epsilon\) for \(x\in \Omega\) and \(u^\epsilon = g\) on \(\mathbb{R}^n\backslash \Omega\), where \(g\) is assumed to extend to a bounded and continuous function on \(\mathbb{R}^n\backslash \Omega\), then both the subsolution and the supersolution obtained by Perron's method are solutions to the game. In Section 3 they use this theorem to prove a general existence theorem for a large class of operators between metric spaces. In Section 4 a boundedness result is proven, while uniqueness is deduced from a comparison result in Section 5. Section 2 discusses the infinity Laplacian, mean curvature flow and Hamilton-Jacobi equations in the light of the theorems proven in the other sections.
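To make the role of the dynamic programming equation concrete, here is a minimal numerical sketch (not taken from the paper) that iterates the tug-of-war operator \(u^\epsilon (x) = \frac{1}{2}\sup_{B_\epsilon(x)} u^\epsilon + \frac{1}{2}\inf_{B_\epsilon(x)} u^\epsilon\) on a one-dimensional grid; the domain \(\Omega=(0,1)\), the boundary datum \(g\), the step size and the stopping tolerance are all illustrative choices.

```python
import numpy as np

# Minimal sketch (not from the paper): fixed-point iteration for the
# tug-of-war dynamic programming equation
#     u_eps(x) = 1/2 * sup_{B_eps(x)} u_eps + 1/2 * inf_{B_eps(x)} u_eps
# on Omega = (0, 1), with u_eps = g outside Omega.  All names and
# parameters below are illustrative assumptions.

def tug_of_war_value(g, eps=0.1, h=0.01, tol=1e-8, max_iter=100_000):
    # Grid covering Omega plus an eps-wide collar on which u_eps = g.
    x = np.arange(-eps, 1.0 + eps + h, h)
    inside = (x > 0.0) & (x < 1.0)
    u = g(x).astype(float)        # start from the boundary extension
    r = int(round(eps / h))       # radius of B_eps(x) in grid points

    for _ in range(max_iter):
        u_new = u.copy()
        for i in np.where(inside)[0]:
            ball = u[max(i - r, 0): i + r + 1]       # values on B_eps(x_i)
            u_new[i] = 0.5 * ball.max() + 0.5 * ball.min()
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return x, u_new

x_grid, u_val = tug_of_war_value(g=lambda t: t**2)   # sample datum g(t) = t^2
print(u_val[::20])   # as eps -> 0 the values approach the infinity-harmonic
                     # function (affine in 1D) with boundary values g
```

The iteration fixes the values on the \(\epsilon\)-collar outside \(\Omega\) and repeatedly applies the half-sup-plus-half-inf average inside, which is exactly the dynamic programming equation the existence theorem addresses.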
dynamic programming principle
differential games
nonlinear partial differential equations