Foundations of deterministic and stochastic control (Q699443)

From MaRDI portal
scientific article
    Statements

    Foundations of deterministic and stochastic control (English)
    24 September 2002
This volume is a textbook on linear control systems with an emphasis on stochastic optimal control, with solution methods using spectral factorization in line with the original approach of N. Wiener. Continuous-time and discrete-time versions are presented in parallel; when this is not the case, it is pointed out below.

Chapter one is an introduction to realization theory (this is where controllability and observability appear as important underlying concepts), and identification problems are also tackled. Chapter two is devoted to the LQ output regulator, with a section focusing more specifically on tracking issues. Chapter three consists of a development of stability theory: Lyapunov's theory is presented first in the nonlinear case, and then several frequency-domain criteria (the Nyquist criterion, the circle theorem, the small gain theorem) are provided for the linear continuous-time case. Chapter four is a short and practical introduction to probability and random processes, with an emphasis on the discrete case; since the book mainly deals with linear systems, Gaussian variables are conveniently included at the end. The next chapter presents the Kalman-Bucy filter in the discrete case. Chapter six is the counterpart of chapter four for the continuous-time case. Chapter seven introduces stochastic optimal control problems in the discrete (mainly linear) case, and the dynamic programming method is presented; the separation theorem is provided in both the discrete-time and the continuous-time cases. Chapter eight is devoted to the Luenberger observer (in the continuous-time situation). Chapter nine shifts the focus to estimation problems for nonlinear and finite-state problems, and a Bayesian approach for finite Markov processes is proposed. Chapter ten is devoted to Wiener filtering and spectral factorization, which is applied to the computation of optimal gains.
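As a sketch of the chapter-five material (in standard notation, which need not coincide with the book's), the discrete-time Kalman filter for the linear Gaussian model \(x_{k+1}=Ax_k+w_k\), \(y_k=Cx_k+v_k\) with \(w_k\sim N(0,Q)\), \(v_k\sim N(0,R)\) alternates a prediction step and an update step:

```latex
% Prediction of the state estimate and error covariance
\hat x_{k|k-1} = A\,\hat x_{k-1|k-1}, \qquad
P_{k|k-1} = A\,P_{k-1|k-1}A^{\top} + Q.
% Update: Kalman gain, corrected estimate, corrected covariance
K_k = P_{k|k-1}C^{\top}\bigl(C\,P_{k|k-1}C^{\top} + R\bigr)^{-1}, \qquad
\hat x_{k|k} = \hat x_{k|k-1} + K_k\bigl(y_k - C\,\hat x_{k|k-1}\bigr), \qquad
P_{k|k} = (I - K_kC)\,P_{k|k-1}.
```

The gain \(K_k\) weighs the innovation \(y_k - C\hat x_{k|k-1}\) against the prior uncertainty, which is the recursive structure the book exploits throughout its filtering chapters.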
Chapter eleven generalizes the Wiener-Hopf factorization methods to the infinite-dimensional continuous-time case; optimal feedback gains are obtained and shown to be stabilizing. Chapter twelve is on spectral factorization methods for filters without Riccati equations, also in the distributed continuous-time case; as previously, the closed-loop system is shown to be stable. The last two chapters present computational methods for solving the Riccati equation (Newton's method) and spectral factorization problems.

Two appendices introduce functional-analytic concepts and probability theory, and there are 77 references and an index. The chapters (except for the last two) end with problems. In the reviewer's opinion, the topics are juxtaposed without sufficiently pointing out the relations between them where they exist; in particular, the key links between controllability, stabilization and optimal control deserved more explicit emphasis. It should also be mentioned that chapters one and nine do not fit well with the spirit of the book as outlined in the first two sentences of this review. However, the book presents important concepts of control theory in a clear way and can be used for teaching.
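To illustrate the Newton approach to the Riccati equation mentioned above (a standard sketch; the book's exact formulation may differ), consider the continuous-time algebraic Riccati equation of the LQ regulator, \(A^{\top}P + PA - PBR^{-1}B^{\top}P + Q = 0\). Newton's method here is the Kleinman iteration: starting from a stabilizing gain, each step solves one linear Lyapunov equation,

```latex
K_i = R^{-1}B^{\top}P_i, \qquad
(A - BK_i)^{\top}P_{i+1} + P_{i+1}(A - BK_i) + K_i^{\top}RK_i + Q = 0,
```

so the quadratic matrix equation is replaced by a sequence of linear ones; under the usual stabilizability and detectability assumptions the iterates \(P_i\) converge to the stabilizing solution \(P\).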
    stochastic optimal control
    spectral factorization
    LQ output regulator
    tracking
    Kalman-Bucy filters
    separation theorem
    estimation
    Wiener filtering
    Riccati equation
