Optimal feedback control (Q1355978)
Language | Label | Description | Also known as
---|---|---|---
English | Optimal feedback control | scientific article |
Statements
Optimal feedback control (English)
2 June 1997
We consider the linear terminal problem \[ I(u)= c'x(t^*) \to \max, \tag{1} \] \[ \dot x= Ax+bu,\quad x(0)= x_0, \tag{2} \] \[ x(t^*) \in X^*= \{x\in \mathbb{R}^n: Hx =g\}, \tag{3} \] \[ |u(t)| \leq 1,\quad t\in T= [0,t^*], \tag{4} \] where \(x\in \mathbb{R}^n\), \(u\in \mathbb{R}\), \(g\in\mathbb{R}^n\), and \(\operatorname{rank} H=m<n\).

In our view, the problem of synthesizing optimal systems is closely connected with the specific character of the control process in real time, a connection that does not seem to have been investigated systematically in mathematics. The subject is extremal problems that are formed continuously in real time, together with the corresponding continuous correction of their current solutions. Each such problem is of program type and requires considerable computer time to solve. However, since the parameters of this continuous series of extremal problems are connected in a continuous way, it is advisable to treat the problems not as independent but as elements of a single continuous chain. The modern theory of extremal problems provides methods for correcting solutions that are far more efficient, measured in computer time, than methods that solve each problem from scratch. Consequently, the view of solving the whole collection of extremal problems as a process of continuous, successive correction of current solutions is natural for the theory of optimal feedback control. The proposed approach can be interpreted as a unification of the ideas of optimal program control and invariant embedding: program control is the main object of the maximum principle, and invariant embedding is the main tool of dynamic programming. The approach to the synthesis problem is based on schemes worked out in Minsk during 1975-1990. The main objects of study are discrete systems obtained from continuous ones (except in Chapter 2).

In Chapter 1, an optimization method for discrete control systems is proposed; its description is needed to understand the following chapters. Chapter 2 considers questions connected with the optimization of continuous control systems in real time. Chapter 3 treats the synthesis problem for control systems under conditions of uncertainty and constructs different controllers for systems operating under various information conditions. Chapter 4 generalizes the approach of Chapter 3: optimal feedback control is designed on the basis of incomplete and inexact data provided by the measuring device. The Appendix presents the adaptive method of linear programming described by R. Gabasov, F. M. Kirillova and O. I. Kostyukova in the late 1980s; its constructions are used in Chapter 1 to justify a finite algorithm for the terminal optimal control problem with constraints on the terminal state and the control.
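Once the control is restricted to piecewise-constant values on a time grid, the terminal problem (1)-(4) reduces to a finite-dimensional linear program, which is the kind of discrete problem treated in Chapter 1. The sketch below is only illustrative and is not the authors' adaptive algorithm: the double-integrator data (A, b, c, H, g, x0, horizon) are assumed for the example, the discretization uses a zero-order hold, and scipy's general-purpose LP solver stands in for the adaptive method of linear programming; a second solve over the remaining horizon crudely imitates the real-time correction of current solutions described above.

```python
# Illustrative sketch only: reduce the discretized terminal problem (1)-(4)
# to a linear program.  All numerical data below are assumptions, and
# scipy's LP solver replaces the adaptive LP method used in the book.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import linprog


def solve_terminal_lp(A, b, c, H, g, x0, t_star, N):
    """Maximize c'x(t*) s.t. x' = Ax + bu, Hx(t*) = g, |u| <= 1,
    with u piecewise constant on N equal subintervals of [0, t*]."""
    n = A.shape[0]
    h = t_star / N
    # exact zero-order-hold discretization via an augmented matrix exponential
    M = expm(np.block([[A, b], [np.zeros((1, n + 1))]]) * h)
    Ad, Bd = M[:n, :n], M[:n, n:]
    # x_N = Ad^N x0 + Phi u, where Phi collects the control influence terms
    x_free = np.linalg.matrix_power(Ad, N) @ x0
    Phi = np.hstack([np.linalg.matrix_power(Ad, N - 1 - k) @ Bd
                     for k in range(N)])
    res = linprog(c=-(c @ Phi),                 # linprog minimizes
                  A_eq=H @ Phi, b_eq=g - H @ x_free,
                  bounds=[(-1.0, 1.0)] * N, method="highs")
    return res.x, c @ (x_free + Phi @ res.x)


# assumed data: double integrator, drive the position to zero on [0, 2]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
c = np.array([0.0, -1.0])            # maximize -velocity at t*
H = np.array([[1.0, 0.0]])           # terminal condition: position = 0
g = np.array([0.0])
x0 = np.array([1.0, 0.0])

u, val = solve_terminal_lp(A, b, c, H, g, x0, t_star=2.0, N=20)
print("optimal value c'x(t*):", val)

# a crude stand-in for real-time correction: at a later time the current
# state is re-measured and the problem is re-solved on the remaining horizon
# (the book corrects the previous solution instead of re-solving from scratch)
x_meas = np.array([0.9, -0.15])      # assumed measured state at t = 0.2
u2, val2 = solve_terminal_lp(A, b, c, H, g, x_meas, t_star=1.8, N=18)
print("re-solved value on remaining horizon:", val2)
```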
Keywords: optimal feedback control; linear terminal problem; synthesis; real time; computer time; maximum principle; dynamic programming; discrete control systems; uncertainty; adaptive method; linear programming