Dynamic programming principle and Hamilton-Jacobi-Bellman equation under nonlinear expectation

From MaRDI portal
Publication:5864584

DOI: 10.1051/COCV/2022019 · zbMATH Open: 1492.93199 · arXiv: 2106.02814 · OpenAlex: W3168988147 · Wikidata: Q114011477 · Scholia: Q114011477 · MaRDI QID: Q5864584 · FDO: Q5864584

Xiaojuan Li, Shaolin Ji, Mingshang Hu

Publication date: 8 June 2022

Published in: ESAIM: Control, Optimisation and Calculus of Variations

Abstract: In this paper, we study a stochastic recursive optimal control problem in which the value functional is defined by the solution of a backward stochastic differential equation (BSDE) under $\tilde{G}$-expectation. Under standard assumptions, we establish the comparison theorem for this kind of BSDE and give a novel and simple method to obtain the dynamic programming principle. Finally, we prove that the value function is the unique viscosity solution of a type of fully nonlinear HJB equation.
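As a schematic illustration only (not the paper's exact statement; the control set $U$, generator $H$, and terminal data $\phi$ below are generic placeholders), a stochastic recursive control problem of this type, its dynamic programming principle, and the associated fully nonlinear HJB equation typically take the form:

```latex
% Value function defined through a recursive (BSDE) functional,
% where Y^{t,x;u} solves a BSDE under the nonlinear expectation:
V(t,x) = \sup_{u \in \mathcal{U}[t,T]} Y_t^{t,x;u}.

% Dynamic programming principle (schematic): optimize over [t, t+\delta]
% with the value function itself as terminal data at time t+\delta:
V(t,x) = \sup_{u \in \mathcal{U}[t,t+\delta]}
          \mathcal{G}^{u}_{t,t+\delta}\!\left[ V\big(t+\delta, X^{t,x;u}_{t+\delta}\big) \right].

% Fully nonlinear HJB equation, to be understood in the viscosity sense:
\partial_t V(t,x)
  + \sup_{v \in U} H\big(t, x, v, V(t,x), D_x V(t,x), D_x^2 V(t,x)\big) = 0,
\qquad V(T,x) = \phi(x).
```

Here $\mathcal{G}^{u}_{t,t+\delta}[\cdot]$ denotes the backward recursive (BSDE) semigroup under the nonlinear expectation; the uniqueness result referred to in the abstract is for viscosity solutions of the terminal-value problem in the last line.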


Full work available at URL: https://arxiv.org/abs/2106.02814










Cited In (6)





