Dynamic programming principle and Hamilton-Jacobi-Bellman equation under nonlinear expectation
From MaRDI portal
Publication:5864584
Abstract: In this paper, we study a stochastic recursive optimal control problem in which the value functional is defined by the solution of a backward stochastic differential equation (BSDE) under \(G\)-expectation. Under standard assumptions, we establish a comparison theorem for this kind of BSDE and give a novel and simple method to obtain the dynamic programming principle. Finally, we prove that the value function is the unique viscosity solution of a type of fully nonlinear HJB equation.
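For orientation, the objects described in the abstract can be sketched in the notation that is standard in the \(G\)-expectation literature; the following schematic is an assumption based on that standard setting (symbols \(b\), \(\sigma\), \(f\), \(\Phi\), \(\Gamma\) are illustrative), not an extract from the paper itself:

```latex
% Schematic only (standard G-expectation setting; notation assumed, not from the paper).
% Controlled forward-backward system under a G-Brownian motion (B_s):
%   dX_s = b(X_s,u_s)\,ds + \sigma(X_s,u_s)\,dB_s,        X_t = x,
%  -dY_s = f(X_s,Y_s,Z_s,u_s)\,ds - Z_s\,dB_s + dK_s,     Y_T = \Phi(X_T).
% The value function is defined through the backward component,
\[
  V(t,x) \;=\; \operatorname*{ess\,sup}_{u}\, Y_t^{t,x;u},
\]
% and the associated fully nonlinear HJB equation takes the form
\[
  \partial_t V
  + \sup_{u}\Bigl\{ G\bigl(\sigma\sigma^{\top}(x,u)\,D_x^2 V\bigr)
      + b(x,u)\cdot D_x V
      + f\bigl(x, V, \sigma^{\top}(x,u)\,D_x V, u\bigr) \Bigr\} = 0,
  \qquad V(T,x) = \Phi(x),
\]
% where G(A) = \tfrac12 \sup_{\gamma \in \Gamma} \operatorname{tr}(\gamma A)
% is the sublinear function generating the G-expectation; the full
% nonlinearity of the HJB equation enters through G.
```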
Recommendations
- Dynamic programming principle for stochastic recursive optimal control problem driven by a \(G\)-Brownian motion
- Dynamic Programming Principle for One Kind of Stochastic Recursive Optimal Control Problem and Hamilton–Jacobi–Bellman Equation
- Dynamic programming principle and viscosity solutions of Hamilton-Jacobi-Bellman equations for stochastic recursive control problem with non-Lipschitz generator
- A stochastic recursive optimal control problem under the G-expectation framework
- A generalized dynamic programming principle and Hamilton-Jacobi-Bellman equation
Cites work
- scientific article (no title available); zbMATH DE number 1325009
- A generalized dynamic programming principle and Hamilton-Jacobi-Bellman equation
- A theoretical framework for the pricing of contingent claims in the presence of model uncertainty
- Ambiguous volatility, possibility and utility in continuous time
- Backward Stochastic Differential Equations in Finance
- Backward stochastic differential equations driven by \(G\)-Brownian motion
- Comparison theorem, Feynman-Kac formula and Girsanov transformation for BSDEs driven by \(G\)-Brownian motion
- Controlled Markov processes and viscosity solutions
- Dynamic programming for general linear quadratic optimal stochastic control with random coefficients
- Dynamic programming principle for stochastic recursive optimal control problem driven by a \(G\)-Brownian motion
- Forward-backward stochastic differential equations and their applications
- Function spaces and capacity related to a sublinear expectation: application to \(G\)-Brownian motion paths
- Multi-dimensional \(G\)-Brownian motion and related stochastic calculus under \(G\)-expectation
- Nonlinear expectations and nonlinear Markov chains
- Nonlinear expectations and stochastic calculus under uncertainty. With robust CLT and \(G\)-Brownian motion
- On representation theorem of \(G\)-expectations and paths of \(G\)-Brownian motion
- Optimal control problems of fully coupled FBSDEs and viscosity solutions of Hamilton-Jacobi-Bellman equations
- Optimal investment under model uncertainty in nondominated models
- Probabilistic interpretation for a system of quasilinear parabolic partial differential equation combined with algebra equations
- Probabilistic interpretation of a coupled system of Hamilton-Jacobi-Bellman equations
- Quasi-continuous random variables and processes under the \(G\)-expectation framework
- Robust utility maximization in nondominated models with 2BSDE: the uncertain volatility model
- Solving forward-backward stochastic differential equations explicitly -- a four step scheme
- Stochastic Differential Games and Viscosity Solutions of Hamilton–Jacobi–Bellman–Isaacs Equations
- The existence and uniqueness of viscosity solution to a kind of Hamilton-Jacobi-Bellman equation
- Two person zero-sum game in weak formulation and path dependent Bellman-Isaacs equation
- User’s guide to viscosity solutions of second order partial differential equations
- Wellposedness of second order backward SDEs
- \(G\)-expectation, \(G\)-Brownian motion and related stochastic calculus of Itô type
Cited in (11 documents)
- Dynamic programming principle and viscosity solutions of Hamilton-Jacobi-Bellman equations for stochastic recursive control problem with non-Lipschitz generator
- Dynamic intertemporal utility optimization by means of Riccati transformation of Hamilton-Jacobi-Bellman equation
- Path-dependent dynamic programming principles and related path-dependent PDEs under \(G\)-expectation
- A stochastic recursive optimal control problem under the G-expectation framework
- Dynamic programming principle and associated Hamilton-Jacobi-Bellman equation for stochastic recursive control problem with non-Lipschitz aggregator
- Comparison principle for Hamilton-Jacobi-Bellman equations via a bootstrapping procedure
- Dynamic Programming Principle and Hamilton--Jacobi--Bellman Equations for Fractional-Order Systems
- Dynamic Programming Principle for One Kind of Stochastic Recursive Optimal Control Problem and Hamilton–Jacobi–Bellman Equation
- On the basis of the Hamilton-Jacobi-Bellman equation in economic dynamics
- A weak dynamic programming principle for combined optimal stopping/stochastic control with \({\mathcal E}^{f}\)-expectations
- The Bellman's principle of optimality in the discounted dynamic programming