State and Control Path-Dependent Stochastic Zero-Sum Differential Games: Viscosity Solutions of Path-Dependent Hamilton-Jacobi-Isaacs Equations
From MaRDI portal
Publication: Q6328352
arXiv: 1911.00315 · MaRDI QID: Q6328352
Author: Jun-Hee Moon
Publication date: 1 November 2019
Abstract: In this paper, we consider state and control path-dependent stochastic zero-sum differential games, where the dynamics and the running cost depend on both the state paths and the control paths of the players. Using the notion of nonanticipative strategies, we define lower and upper value functionals, which are functions of the initial state and control paths of the players. We prove that these value functionals satisfy the dynamic programming principle. The associated lower and upper Hamilton-Jacobi-Isaacs (HJI) equations arising from the dynamic programming principle are state and control path-dependent nonlinear second-order partial differential equations. We apply the functional Itô calculus to prove that the lower and upper value functionals are viscosity solutions of the (lower and upper) state and control path-dependent HJI equations, where the notion of viscosity solutions is defined on a compact subset of an -Hölder space introduced in \cite{Tang_DCD_2015}. Moreover, we show that the Isaacs condition together with the uniqueness of viscosity solutions implies the existence of the game value. For the state path-dependent case, we prove the uniqueness of classical solutions for the (state path-dependent) HJI equations.
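For orientation, the Isaacs condition mentioned in the abstract can be illustrated by its standard form in the classical (non-path-dependent) setting; the symbols below (drift b, diffusion σ, running cost ℓ, control sets U and V) are generic notation for a zero-sum stochastic differential game, not notation taken from the paper itself:

```latex
% Lower Hamiltonian: the maximizing player commits first
H^{-}(t,x,p,A) = \sup_{u \in U}\,\inf_{v \in V}
  \Big\{ b(t,x,u,v)^{\top} p
       + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}(t,x,u,v)\,A\big)
       + \ell(t,x,u,v) \Big\}

% Upper Hamiltonian: the minimizing player commits first
H^{+}(t,x,p,A) = \inf_{v \in V}\,\sup_{u \in U}
  \Big\{ b(t,x,u,v)^{\top} p
       + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}(t,x,u,v)\,A\big)
       + \ell(t,x,u,v) \Big\}

% Isaacs condition: H^{-} = H^{+}; when it holds (and viscosity solutions
% are unique), the lower and upper value functions coincide, so the game
% has a value. In general only H^{-} \le H^{+} is guaranteed.
```

In the paper's setting the Hamiltonians depend on the state and control paths rather than on a point (t, x), but the structural role of the Isaacs condition, closing the gap between the sup-inf and inf-sup values, is the same.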
Differential games and control (49N70) · Dynamic programming in optimal control and differential games (49L20) · Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)