The robust maximum principle. Theory and applications. (Q636047)

From MaRDI portal
scientific article

    Statements

    The robust maximum principle. Theory and applications. (English)
    25 August 2011
This well-structured and clearly written book is worth reading for readers interested in both the theory and the applications of optimal control: in theory, because robustness embraces dependencies and sensitivities with respect to deterministic or stochastic uncertainties; in applications, because the robustness of a method determines how reliably its numerical results can be used. The authors write in their preface that most of the material in this book has been tested in class at different institutes, and that some studies present the main results of Ph.D. theses of their students. They add that `this book is for experts, scientists, and researchers in the field of control theory', and that it could also be of interest for applications to complex problems in engineering and management science. Optimal control is a rapidly growing field whose origins lie in the 1950s with the maximum principle and Bellman's dynamic programming. Accordingly, Part I of the book first reviews classical optimal control theory (67 pages) in modern language and with proofs, and then treats two important examples in detail, linear-quadratic optimal control and time optimization (57 pages). Part II (60 pages) introduces the tent method, which serves throughout the book as the basic mathematical tool for proofs and for links to separability. Historical remarks are added frequently, here, e.g., on earlier results of Dubovitskii and Milyutin and even on results of Farkas. Further historical remarks concern Hestenes (the `smooth' maximum principle), Feldbaum (first results on nonclassical time-optimization problems), Pontryagin (the conjecture on local sufficiency of the maximum principle) and the name `Pontryagin maximum principle'.
The two main parts follow: Part III, Robust Maximum Principle for Deterministic Systems (152 pages), and Part IV, Robust Maximum Principle for Stochastic Systems (80 pages), together with a list of references (123 items) and an index (3 pages). Robust optimization is studied first for the case where the uncertainty set is a finite parameter set. The Hamiltonian then appears as the sum of the standard Hamiltonians, each corresponding to a fixed value of the uncertainty parameter. Maximum principles of the usual form, necessary for robust optimality, are proved, and several examples show their usefulness. One of the principles is proved for a min-max Mayer problem, where the max is taken over the uncertainty set and the min over the controls; another is proved for a min-max Bolza problem. A linear control system is studied in detail for which, under certain conditions, the maximum principle is both necessary and sufficient; under special circumstances, piecewise constancy of the robust controls for polyhedral resource sets is obtained. Secondly, robust maximum principles are proved for compact measurable uncertainty sets. As the authors write, it is shown that the robust optimal control, which minimizes the worst parametric value of the terminal functional, maximizes the Lebesgue-Stieltjes integral of the standard Hamiltonian (calculated for a fixed parameter value), taken over the uncertainty parameter set. Part III closes with robust dynamic programming procedures in the `smooth' case, covering Hamilton-Jacobi-Bellman equations, relationships to robust maximum principles, sliding-mode control and (30 pages) multimodel differential games (robust Nash equilibrium, prey-predator differential games with the missile-guidance problem as an example, and numerical procedures with a convergence proof). Part IV is devoted to stochastic robust optimal control.
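The min-max Mayer problem and the "sum of standard Hamiltonians" structure described above can be sketched as follows; the notation (state \(x\), control \(u\), finite parameter set \(\mathcal{A}\), adjoint variables \(\psi^{\alpha}\)) is illustrative and not necessarily the book's:

```latex
% Min-max Mayer problem over a finite uncertainty set \mathcal{A}:
% minimize over admissible controls the worst-case terminal cost.
\[
  \min_{u(\cdot)} \; \max_{\alpha \in \mathcal{A}} \;
  h^{0}\bigl(x^{\alpha}(T)\bigr),
  \qquad
  \dot{x}^{\alpha}(t) = f^{\alpha}\bigl(x^{\alpha}(t), u(t)\bigr),
  \quad x^{\alpha}(0) = x_{0}.
\]
% The robust Hamiltonian is the sum of the standard Hamiltonians
% H^{\alpha}, each computed for a fixed parameter value \alpha:
\[
  H(x, u, \psi) = \sum_{\alpha \in \mathcal{A}}
    H^{\alpha}\bigl(x^{\alpha}, u, \psi^{\alpha}\bigr)
  = \sum_{\alpha \in \mathcal{A}}
    \bigl\langle \psi^{\alpha},\, f^{\alpha}(x^{\alpha}, u) \bigr\rangle .
\]
% For a compact measurable uncertainty set, the sum becomes a
% Lebesgue-Stieltjes integral over the parameter set:
\[
  H(x, u, \psi) = \int_{\mathcal{A}}
    H^{\alpha}\bigl(x^{\alpha}, u, \psi^{\alpha}\bigr)\, d\nu(\alpha).
\]
```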
After a historical introduction to max-min stochastic control, the authors explain that the main purpose of this fourth part is to explore the possibilities of the maximum principle approach for a class of min-max control problems for uncertain systems given by stochastic differential equations with a controlled diffusion term and unknown parameters within a given finite set. For simplicity, the min-max problem is taken to belong to the class of optimization problems on a fixed finite horizon in which the cost function contains only a terminal term. The techniques specific to stochastic calculus are fully used in this last part of the book and support the reviewer's recommendation: it is a pleasure to read this book.
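The class of stochastic problems described above can be sketched in illustrative notation (drift \(b^{\alpha}\), diffusion \(\sigma^{\alpha}\), Wiener process \(W\); these symbols are assumptions, not necessarily the book's):

```latex
% Controlled Ito diffusion with unknown parameter \alpha in a
% finite set \mathcal{A}; the diffusion term is also controlled:
\[
  dx^{\alpha}(t) = b^{\alpha}\bigl(t, x^{\alpha}(t), u(t)\bigr)\, dt
    + \sigma^{\alpha}\bigl(t, x^{\alpha}(t), u(t)\bigr)\, dW(t),
  \qquad x^{\alpha}(0) = x_{0}.
\]
% Min-max problem on a fixed finite horizon [0, T] with a purely
% terminal cost:
\[
  \min_{u(\cdot)} \; \max_{\alpha \in \mathcal{A}} \;
  \mathbb{E}\bigl[\, h^{0}\bigl(x^{\alpha}(T)\bigr) \bigr].
\]
```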
    optimal control
    maximum principle
    dynamic programming
    robustness
    differential games
    tent method
    uncertainty set
    stochastic control
