The maximum principle for positional controls and the problems of optimal system synthesis (Q1361961)
Language | Label | Description | Also known as |
---|---|---|---|
English | The maximum principle for positional controls and the problems of optimal system synthesis | scientific article | |
Statements
The maximum principle for positional controls and the problems of optimal system synthesis (English)
17 May 1998
The authors study the problem \[ \Phi \bigl(x(t_1)\bigr) \to \min, \qquad \dot x = f(x,u,t), \] \[ u(x,t) \in V, \qquad (x,t) \in \mathbb{R}^n \times T, \] where the optimal control is sought in the class of positional (feedback) controls \(u = u(x,t)\). An analogue of Pontryagin's maximum principle is derived directly in the class of positional controls. Its relations to the classical maximum principle and to the dynamic programming method are examined. Examples are presented in which the Bellman function is not continuously differentiable, so that there is no formal justification for applying the classical Bellman equation, whereas the positional maximum principle obtained in the paper successfully solves the problem of optimal system synthesis.
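For orientation, the two classical objects mentioned in the review can be recalled in their standard textbook form; the formulations below are not quotations from the paper, and the symbols \(B\) for the Bellman function, \(H\) for the Hamiltonian and \(\psi\) for the adjoint variable are conventional choices. The dynamic programming method rests on the Bellman equation \[ \frac{\partial B}{\partial t}(x,t) + \min_{u\in V} \Bigl\langle \frac{\partial B}{\partial x}(x,t),\, f(x,u,t) \Bigr\rangle = 0, \qquad B(x,t_1) = \Phi(x), \] whose derivation presupposes that \(B\) is continuously differentiable. The classical (programmed) maximum principle instead introduces the Hamiltonian \(H(\psi,x,u,t) = \langle \psi, f(x,u,t)\rangle\) and the adjoint system \[ \dot\psi = -\frac{\partial H}{\partial x}, \qquad \psi(t_1) = -\frac{\partial \Phi}{\partial x}\bigl(x(t_1)\bigr), \] and requires that along an optimal trajectory \(H\bigl(\psi(t),x(t),u(t),t\bigr) = \max_{v\in V} H\bigl(\psi(t),x(t),v,t\bigr)\). When \(B\) is smooth, the two approaches are connected by \(\psi(t) = -\partial B/\partial x\bigl(x(t),t\bigr)\); it is exactly this link that becomes unavailable at points where \(B\) fails to be continuously differentiable.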
necessary conditions
optimal system synthesis
positional controls
Pontryagin's maximum principle
dynamic programming
Bellman function