The Bellman equation for constrained deterministic optimal control problems (Q688889)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | The Bellman equation for constrained deterministic optimal control problems | scientific article | |
Statements
The Bellman equation for constrained deterministic optimal control problems (English)
Publication date: 1 November 1993
The author considers the problem of time-optimal control to a given target for the ODE system \(y' = f(y,u)\) with a state-dependent control constraint \(u \in U(y)\), where \(U\) is assumed to be Lipschitzian with compact values. She proves that the value function is a viscosity solution of the associated Hamilton-Jacobi equation \(H(x, DW) = 0\). She also proves that, under certain conditions, any viscosity subsolution of the corresponding boundary value problem yields a solution to the control problem. If the target is regular, she shows that the optimal value function is continuous and obtains a necessary and sufficient condition for optimality.
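For orientation, a standard way of writing the minimum-time problem and its Bellman (Hamilton-Jacobi) equation is sketched below; the notation \(\mathcal{T}\) for the target set and the particular normalization of \(W\) are chosen here for illustration and need not coincide with the paper's formulation.
\[
T(x) = \inf\bigl\{\, t \ge 0 \,:\, y(t) \in \mathcal{T},\ y(0) = x,\ y'(s) = f\bigl(y(s), u(s)\bigr),\ u(s) \in U\bigl(y(s)\bigr) \text{ for a.e. } s \,\bigr\},
\]
\[
H\bigl(x, DW(x)\bigr) = \sup_{u \in U(x)} \bigl\{ -f(x,u) \cdot DW(x) \bigr\} - 1 = 0 \quad \text{outside } \mathcal{T}, \qquad W = 0 \ \text{on } \partial\mathcal{T},
\]
with \(W\) the minimum-time value function (or a bounded rescaling of it) and the equation understood in the viscosity sense.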
Keywords: Bellman equation; dynamic programming; differential inclusion; time optimal control; Hamilton-Jacobi equation