Supersolutions and subsolutions techniques in a minimax optimal control problem with infinite horizon (Q1287062)
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Supersolutions and subsolutions techniques in a minimax optimal control problem with infinite horizon | scientific article | |
Statements
Supersolutions and subsolutions techniques in a minimax optimal control problem with infinite horizon (English)
20 July 1999
The infinite horizon problem with \(L^{\infty}\) cost functional is studied. The dynamics are \(dy/dt=g(y(t),\alpha(t))\) for \(t>0\) with \(y(0)=x\), and the control \(\alpha\) is chosen to minimize the cost \(J(x,\alpha)=\operatorname{ess\,sup}_{t \geq 0} f(y(t),\alpha(t))\). If \(u(x)\) denotes the associated value function, then \(u\) may fail to be continuous, or even upper or lower semicontinuous. Formally, the Bellman equation should be \(\max\{H(x,u,Du),\min_a f(x,a)-u\}=0\), where \(H(x,r,p)=\min\{p\cdot g(x,a)\mid f(x,a) \leq r\}\); this equation has no coercive term, so uniqueness is a serious issue and the problem may fail to be well posed. Examples of discontinuity are given in the paper, and the main result is that the value function is the minimal supersolution of the equation. The equation is treated in integrated form throughout, i.e., through the dynamic programming principle rather than via viscosity solutions.
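The "integrated form" referred to above rests on the dynamic programming principle for the essential-supremum cost. For this class of minimax problems it is usually stated as in the sketch below, written under standard assumptions on \(f\), \(g\) and the admissible controls; it is not a formula quoted from the paper.

```latex
% Dynamic programming principle (DPP) for the ess-sup cost, sketched:
% for every t > 0 and every initial state x,
u(x) \;=\; \inf_{\alpha}\,\max\Bigl\{
      \operatorname*{ess\,sup}_{0 \le s \le t} f\bigl(y_x(s),\alpha(s)\bigr),\;
      u\bigl(y_x(t)\bigr)
    \Bigr\},
% where y_x is the trajectory of dy/dt = g(y, alpha) with y_x(0) = x.
```

In this integrated setting a supersolution is, roughly, a function satisfying the relation above with \(=\) replaced by \(\geq\) (and a subsolution with \(\leq\)), and the minimality statement of the main result is formulated with respect to this ordering.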
Keywords: minimax problem; infinite horizon problem; Bellman equation; dynamic programming