Dynamic optimization. Deterministic and stochastic models (Q332155)

scientific article; zbMATH DE number 6644783

      Statements

      Dynamic optimization. Deterministic and stochastic models (English)
      27 October 2016
      Part I deals with deterministic dynamic optimization models describing the control of discrete-time systems. The problems are defined by finite sequences of states and actions, and by the transition function moving the system into the new state, with the purpose of maximizing the sum of discounted reward functions or minimizing the sum of discounted cost functions. The authors prove optimality criteria and develop procedures for solving the problems. In Chapter 5 they consider absorbing dynamic programs and solve the problem of finding cost-minimal subpaths in an acyclic network. Special chapters are concerned with the structure of the value functions and of the maximizers. Finally, the authors present the general theory for the case in which the number of stages goes to infinity and derive asymptotically optimal decision rules.

      Part II is devoted to discrete-time stochastic control models. In Chapters 11 and 12 the authors define control models with independent identical disturbances and Markovian decision processes with a finite transition law. Problems of maximizing expected rewards and minimizing expected costs are considered. The structural properties of the optimal solution are studied, as well as the asymptotic behaviour of the solution for an infinite set of states; asymptotically optimal decision rules are deduced. Optimality criteria are proved and solution methods are developed. In Chapter 16 the authors investigate Markovian decision processes with an arbitrary transition law, where the state and action spaces are measurable spaces or separable metric spaces, and derive conditions under which the problem can be solved and an optimal policy exists. Special chapters are devoted to structural properties of the value functions.

      Part III, ``Generalization of Markov Decision Processes'', is devoted to Markovian decision processes with disturbances and considers Markov renewal programs, Bayesian control models, and partially observable models.
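The finite-stage deterministic models of Part I are typically solved by backward induction on the value functions. A minimal sketch, assuming a generic model with a state set, an action set per state, a deterministic transition function, stage rewards, and a discount factor (all names here are illustrative, not the book's notation):

```python
def backward_induction(states, actions, transition, reward, horizon, beta=1.0):
    """Backward induction for a finite-horizon deterministic dynamic program.

    Returns the stage-0 value function and one optimal decision rule per stage.
    `actions(s)` lists admissible actions, `transition(s, a)` gives the next
    state, `reward(n, s, a)` the stage-n reward, `beta` the discount factor.
    """
    V = {s: 0.0 for s in states}           # terminal values V_N(s) = 0
    policy = []
    for n in reversed(range(horizon)):     # stages N-1, ..., 0
        V_new, rule = {}, {}
        for s in states:
            best_a, best_v = None, float("-inf")
            for a in actions(s):
                v = reward(n, s, a) + beta * V[transition(s, a)]
                if v > best_v:
                    best_a, best_v = a, v
            V_new[s], rule[s] = best_v, best_a
        V, policy = V_new, [rule] + policy
    return V, policy
```

The same recursion, run over the topological order of an acyclic network with rewards replaced by negated edge costs, yields the cost-minimal subpaths of Chapter 5.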
      The book contains many examples, problems for the reader, and supplements with additional comments for the advanced reader and with bibliographic notes.
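For the Markovian decision processes with a finite transition law treated in Part II, the expected-reward maximization can be sketched by standard value iteration on the Bellman operator. The data layout below (nested dicts for the transition law and rewards) is an illustrative assumption, not the authors' formulation:

```python
def value_iteration(P, r, beta=0.9, tol=1e-8):
    """Value iteration for a finite Markovian decision process.

    P[s][a] maps successor states to probabilities, r[s][a] is the expected
    one-stage reward, and beta < 1 is the discount factor. Returns the
    (approximate) optimal value function and a greedy stationary policy.
    """
    V = {s: 0.0 for s in P}
    while True:
        # One application of the Bellman optimality operator.
        V_new = {
            s: max(
                r[s][a] + beta * sum(p * V[t] for t, p in P[s][a].items())
                for a in P[s]
            )
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            # Greedy policy with respect to the near-fixed point.
            policy = {
                s: max(
                    P[s],
                    key=lambda a: r[s][a]
                    + beta * sum(p * V_new[t] for t, p in P[s][a].items()),
                )
                for s in P
            }
            return V_new, policy
        V = V_new
```

Because the operator is a beta-contraction, the iterates converge geometrically to the unique fixed point, which is one of the optimality criteria the book proves.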
      dynamic programming
      stochastic control
      dynamic optimization
      Markovian decision process
      stochastic dynamic program
