Stochastic target games and dynamic programming via regularized viscosity solutions


DOI: 10.1287/MOOR.2015.0718
zbMATH Open: 1334.93178
arXiv: 1307.5606
OpenAlex: W1825269578
MaRDI QID: Q2800366

Marcel Nutz, Bruno Bouchard

Publication date: 15 April 2016

Published in: Mathematics of Operations Research

Abstract: We study a class of stochastic target games where one player tries to find a strategy such that the state process almost surely reaches a given target, no matter which action is chosen by the opponent. Our main result is a geometric dynamic programming principle which allows us to characterize the value function as the viscosity solution of a non-linear partial differential equation. Because abstract measurable selection arguments cannot be used in this context, the main obstacle is the construction of measurable almost-optimal strategies. We propose a novel approach where smooth supersolutions are used to define almost-optimal strategies of Markovian type, as in verification arguments for classical solutions of Hamilton--Jacobi--Bellman equations. The smooth supersolutions are constructed by an extension of Krylov's method of shaken coefficients. We apply our results to a problem of option pricing under model uncertainty with different interest rates for borrowing and lending.
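As a rough sketch of the setup described in the abstract (the superscript notation, the controlled pair (X, Y) and the target function g below are assumptions for illustration and are not copied from the paper), the value of the game is the smallest initial capital y from which the controller can find a non-anticipating strategy mapping the adversary's control to an admissible control so that the target is reached almost surely:

% Schematic formulation of a stochastic target game; notation assumed, not verbatim from the paper.
\[
  v(t,x) \;=\; \inf\Bigl\{\, y \in \mathbb{R} \;:\; \exists\, \mathfrak{u}\ \text{such that}\
  Y^{t,x,y,\mathfrak{u}[\alpha],\alpha}_T \,\ge\, g\bigl(X^{t,x,\mathfrak{u}[\alpha],\alpha}_T\bigr)
  \ \text{a.s. for every adversarial control } \alpha \Bigr\}.
\]
% A geometric dynamic programming principle of the kind referred to in the abstract
% informally replaces the terminal condition by the requirement of staying above the
% value function at any intermediate stopping time \tau in [t,T]:
\[
  Y^{t,x,y,\mathfrak{u}[\alpha],\alpha}_\tau \;\ge\; v\bigl(\tau,\, X^{t,x,\mathfrak{u}[\alpha],\alpha}_\tau\bigr)
  \quad \text{a.s.}
\]

This is only an informal sketch of the geometric dynamic programming idea; the precise statement, admissibility conditions, and regularization via smooth supersolutions are given in the paper itself.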


Full work available at URL: https://arxiv.org/abs/1307.5606




