BSDEs and risk-sensitive control, zero-sum and nonzero-sum game problems of stochastic functional differential equations. (Q2574593)
Language | Label | Description | Also known as |
---|---|---|---|
English | BSDEs and risk-sensitive control, zero-sum and nonzero-sum game problems of stochastic functional differential equations. | scientific article | |
Statements
29 November 2005
The authors study three problems from stochastic control and game theory. The first one is the risk-sensitive control problem and can be described as follows: Let \((x(t):0\leq t\leq 1)\) be a solution of a stochastic functional differential equation \[ dx(t)=f(t,x(\cdot ),u(t))\,dt+ \sigma (t,x(\cdot ))\,dB(t),\qquad x(0)=x\in \mathbb R^d, \] where \(B\) is a standard \(d\)-dimensional Brownian motion and \(u\) is a control process with values in a compact metric space. Define a cost functional \[ J(u)=\mathbb E\,\exp \left \{\int _0^1h(s,x(\cdot ),u(s))\,ds+\xi \right \}, \] where \(h\) is a bounded function and \(\xi \) is a bounded random variable. We say that a control process \(u^*\) is an optimal control provided that \(J(u^*)\leq J(u)\) holds for every control process \(u\). The authors show in the first part of the paper that an optimal control exists and is characterized via a solution of an associated backward stochastic differential equation.

The second part deals with a similar, so-called zero-sum risk-sensitive game problem: Let \((x(t):0\leq t\leq 1)\) be a solution of \[ dx(t)=f(t,x(\cdot ),u(t),v(t))\,dt+\sigma (t,x(\cdot ))\,dB(t),\qquad x(0)=x\in \mathbb R^d, \] where \(u\) and \(v\) are control processes with values in compact metric spaces. Define a cost functional \[ J(u,v)=\mathbb E\,\exp \left \{\int _0^1h(s,x(\cdot ),u(s),v(s))\,ds+\xi \right \}, \] where \(h\) and \(\xi \) are bounded. We say that control processes (strategies) \(u^*\) and \(v^*\) form a saddle point for the game provided that \(J(u^*,v)\leq J(u^*,v^*)\leq J(u,v^*)\) holds for every pair of control processes \(u\) and \(v\). It is shown that a saddle point exists provided that Isaacs' condition holds, and that it can again be characterized via a solution of an associated backward stochastic differential equation.

The last part is devoted to the risk-sensitive nonzero-sum game problem: Let \((x(t):0\leq t\leq 1)\) be a solution of the equation \[ dx(t)=f(t,x(t),u(t),v(t))\,dt+\sigma (t,x(t))\,dB(t),\qquad x(0)=x\in \mathbb R^d, \] where \(u\) and \(v\) are control processes with values in compact metric spaces as in the second part, and define two cost functionals \[ J_i(u,v)=\mathbb E\,\exp \left \{\int _0^1h_i(s,x(s),u(s),v(s))\,ds+g_i(x(1))\right \},\qquad i=1,2, \] where \(h_i\) and \(g_i\) are bounded for \(i=1,2\). We say that control processes (strategies) \(u^*\) and \(v^*\) form an equilibrium point for the game provided that \(J_1(u^*,v^*)\leq J_1(u,v^*)\) and \(J_2(u^*,v^*)\leq J_2(u^*,v)\) hold for every pair of control processes \(u\) and \(v\). It is shown that an equilibrium point exists provided that the generalized Isaacs' condition holds, and that it can be characterized via solutions of associated backward stochastic differential equations. The difference between the second and the third part of the paper is that the risk-sensitive nonzero-sum game problem is solved under stronger assumptions on the coefficients: the values of \(f\), \(\sigma \) and \(h_i\) at time \(t\) depend only on the value of \(x(t)\) and not on the whole history up to time \(t\), and the terminal cost must have the particular form \(g_i(x(1))\) rather than being a general bounded random variable \(\xi \) as in the zero-sum risk-sensitive game problem.
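To make the BSDE characterization more concrete, the following schematic sketch may help; it illustrates the standard exponential-transform (weak-formulation) approach to risk-sensitive control and is not necessarily the paper's exact formulation, assuming in particular that \(\sigma \) is invertible and writing \(U\) for the compact control space. One associates with the control problem the quadratic backward stochastic differential equation \[ Y(t)=\xi +\int _t^1\Big (\inf _{u\in U}\big \{Z(s)\,\sigma ^{-1}(s,x(\cdot ))f(s,x(\cdot ),u)+h(s,x(\cdot ),u)\big \}+\tfrac 12|Z(s)|^2\Big )\,ds-\int _t^1Z(s)\,dB(s),\qquad 0\leq t\leq 1, \] whose solution \((Y,Z)\) yields the optimal cost as \(\inf _uJ(u)=\exp \{Y(0)\}\); an optimal control is then obtained by selecting, for each \((s,\omega )\), a minimizer of the Hamiltonian \(z\,\sigma ^{-1}f+h\) at \(z=Z(s)\). In the zero-sum game the infimum is replaced by \(\inf _u\sup _v\), which coincides with \(\sup _v\inf _u\) under Isaacs' condition.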
backward stochastic differential equations
zero-sum game
optimal control
saddle point
equilibrium point