Dynamic Programming Principle for One Kind of Stochastic Recursive Optimal Control Problem and Hamilton–Jacobi–Bellman Equation

From MaRDI portal

DOI: 10.1137/060671917
zbMATH Open: 1171.49022
arXiv: 0704.3775
OpenAlex: W2003293420
MaRDI QID: Q3399259


Authors: Zhen Wu, Zhiyong Yu


Publication date: 29 September 2009

Published in: SIAM Journal on Control and Optimization

Abstract: In this paper, we study a kind of stochastic recursive optimal control problem with obstacle constraints, where the cost functional is described by the solution of a reflected backward stochastic differential equation. We give the dynamic programming principle for this kind of optimal control problem and show that the value function is the unique viscosity solution of the obstacle problem for the corresponding Hamilton–Jacobi–Bellman equation.
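As a sketch of the setting the abstract describes, the following notation is assumed from the standard reflected-BSDE literature and is not taken verbatim from the paper (see the arXiv preprint for the authors' exact formulation). The controlled state follows an SDE, the recursive cost is the first component of a reflected BSDE kept above a lower obstacle $h$, and the value function solves an obstacle problem for the HJB equation:

```latex
% Controlled state and reflected BSDE with lower obstacle h (notation assumed):
\begin{align*}
  dX_s^{t,x;v} &= b(s, X_s^{t,x;v}, v_s)\,ds + \sigma(s, X_s^{t,x;v}, v_s)\,dB_s,
      \qquad X_t^{t,x;v} = x,\\
  Y_s^{t,x;v} &= \Phi(X_T^{t,x;v})
      + \int_s^T f(r, X_r^{t,x;v}, Y_r^{t,x;v}, Z_r^{t,x;v}, v_r)\,dr
      + K_T^{t,x;v} - K_s^{t,x;v} - \int_s^T Z_r^{t,x;v}\,dB_r,\\
  Y_s^{t,x;v} &\ge h(s, X_s^{t,x;v}), \qquad
      \int_t^T \bigl(Y_r^{t,x;v} - h(r, X_r^{t,x;v})\bigr)\,dK_r^{t,x;v} = 0.
\end{align*}
% The value function u(t,x) = sup over admissible controls v of Y_t^{t,x;v}
% is then the unique viscosity solution of the obstacle problem for the
% associated Hamilton--Jacobi--Bellman equation:
\begin{equation*}
  \min\Bigl\{\, u(t,x) - h(t,x),\;
    -\partial_t u(t,x) - \sup_{v \in U}\bigl[\mathcal{L}^v u(t,x)
    + f\bigl(t, x, u(t,x), \sigma^{\top}(t,x,v) D_x u(t,x), v\bigr)\bigr] \Bigr\} = 0,
\end{equation*}
% with terminal condition u(T,x) = Phi(x), where L^v is the second-order
% generator of the controlled diffusion.
```

Here the increasing process $K$ pushes $Y$ upward just enough to stay above the obstacle, which is what produces the $\min\{\cdot,\cdot\}$ (variational-inequality) form of the HJB equation rather than a plain parabolic PDE.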


Full work available at URL: https://arxiv.org/abs/0704.3775




Recommendations





Cited In (34)





