Dynamic optimization. Deterministic and stochastic models (Q332155)
From MaRDI portal
Property / review text: Part I deals with deterministic dynamic optimization models describing the control of discrete-time systems. The problems are defined by finite sequences of states and actions and by the transition function that moves the system into a new state, with the objective of maximizing the sum of discounted rewards or minimizing the sum of discounted costs. The authors prove optimality criteria and develop solution procedures. In Chapter 5 they consider absorbing dynamic programs and solve the problem of finding cost-minimal subpaths in an acyclic network. Special chapters are concerned with the structure of the value functions and of the maximizers. Finally, the authors present the general theory for the case where the number of stages goes to infinity and derive asymptotically optimal decision rules. Part II is devoted to discrete-time stochastic control models. In Chapters 11 and 12 the authors define control models with independent, identically distributed disturbances and Markovian decision processes with finite transition law. Problems of maximizing expected rewards and minimizing expected costs are considered. The structural properties of the optimal solution are studied, as is the asymptotic behaviour of the solution for an infinite set of states. Asymptotically optimal decision rules are derived, optimality criteria are proved, and solution methods are developed. In Chapter 16 the authors investigate Markovian decision processes with arbitrary transition law, where the state and action spaces are measurable or separable metric spaces, and derive conditions under which the problem can be solved and an optimal policy exists. Special chapters are devoted to structural properties of the value functions. Part III, ``Generalization of Markov Decision Processes'', treats Markovian decision processes with disturbances and covers Markov renewal programs, Bayesian control models, and partially observable models.
The book contains many examples, problems for the reader, and supplements with additional comments for the advanced reader and with bibliographic notes.
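The cost-minimal subpath problem in an acyclic network mentioned above (Chapter 5) amounts to a backward recursion from the absorbing target node. The following sketch is purely illustrative; the network, node names, and edge costs are invented and do not come from the book.

```python
# Cost-minimal paths in an acyclic network via backward recursion,
# in the spirit of absorbing dynamic programs: the target node absorbs
# the process at cost 0, and each node's value is the cheapest
# edge-cost-plus-continuation over its successors.

def min_cost_paths(edges, target):
    """Return (value, succ): value[v] = minimal cost from v to target,
    succ[v] = cost-minimal successor of v on such a path."""
    value = {target: 0.0}
    succ = {target: None}

    def solve(v):
        # Memoized backward recursion; well defined because the network is acyclic.
        if v in value:
            return value[v]
        best_cost, best_next = float("inf"), None
        for w, c in edges.get(v, []):
            cand = c + solve(w)
            if cand < best_cost:
                best_cost, best_next = cand, w
        value[v], succ[v] = best_cost, best_next
        return best_cost

    for v in edges:
        solve(v)
    return value, succ

# Small made-up acyclic example: s -> {a, b} -> t.
edges = {
    "s": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 2.0), ("t", 6.0)],
    "b": [("t", 1.0)],
}
value, succ = min_cost_paths(edges, "t")
print(value["s"])  # minimal cost from s to t
```

Following the `succ` pointers from any node reconstructs a cost-minimal subpath to the target.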
Property / reviewed by: Svetlana A. Kravchenko
Property / Mathematics Subject Classification ID: 90-01
Property / Mathematics Subject Classification ID: 90-02
Property / Mathematics Subject Classification ID: 90C39
Property / Mathematics Subject Classification ID: 90C40
Property / Mathematics Subject Classification ID: 90B10
Property / Mathematics Subject Classification ID: 93E20
Property / Mathematics Subject Classification ID: 60J20
Property / Mathematics Subject Classification ID: 60J05
Property / zbMATH DE Number: 6644783
Property / zbMATH Keywords: dynamic programming
Property / zbMATH Keywords: stochastic control
Property / zbMATH Keywords: dynamic optimization
Property / zbMATH Keywords: Markovian decision process
Property / zbMATH Keywords: stochastic dynamic program
Property / MaRDI profile type: MaRDI publication profile
Property / full work available at URL: https://doi.org/10.1007/978-3-319-48814-1
Property / OpenAlex ID: W4251081320
Latest revision as of 01:28, 20 March 2024
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Dynamic optimization. Deterministic and stochastic models | scientific article |
Statements
Dynamic optimization. Deterministic and stochastic models (English)
0 references
27 October 2016
0 references
Part I deals with deterministic dynamic optimization models describing the control of discrete-time systems. The problems are defined by finite sequences of states and actions and by the transition function that moves the system into a new state, with the objective of maximizing the sum of discounted rewards or minimizing the sum of discounted costs. The authors prove optimality criteria and develop solution procedures. In Chapter 5 they consider absorbing dynamic programs and solve the problem of finding cost-minimal subpaths in an acyclic network. Special chapters are concerned with the structure of the value functions and of the maximizers. Finally, the authors present the general theory for the case where the number of stages goes to infinity and derive asymptotically optimal decision rules. Part II is devoted to discrete-time stochastic control models. In Chapters 11 and 12 the authors define control models with independent, identically distributed disturbances and Markovian decision processes with finite transition law. Problems of maximizing expected rewards and minimizing expected costs are considered. The structural properties of the optimal solution are studied, as is the asymptotic behaviour of the solution for an infinite set of states. Asymptotically optimal decision rules are derived, optimality criteria are proved, and solution methods are developed. In Chapter 16 the authors investigate Markovian decision processes with arbitrary transition law, where the state and action spaces are measurable or separable metric spaces, and derive conditions under which the problem can be solved and an optimal policy exists. Special chapters are devoted to structural properties of the value functions. Part III, ``Generalization of Markov Decision Processes'', treats Markovian decision processes with disturbances and covers Markov renewal programs, Bayesian control models, and partially observable models.
The book contains many examples, problems for the reader, and supplements with additional comments for the advanced reader and with bibliographic notes.
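The finite-stage backward-induction scheme underlying the models the review describes can be sketched as follows. The two-state maintenance model, its transition probabilities, and its rewards are invented for illustration and do not come from the book; only the recursion itself reflects the general method.

```python
# Backward induction for an N-stage Markovian decision process with finite
# state and action sets: maximize E[sum_{n<N} beta^n * r(s_n, a_n)].
# V is computed backwards from the terminal value V_N = 0, and at each
# stage a maximizer f_n(s) is recorded.

def backward_induction(states, actions, P, r, beta, N):
    """P[s][a]: dict next_state -> probability; r[s][a]: one-stage reward.
    Returns the stage-0 value function V_0 and stage-dependent maximizers."""
    V = {s: 0.0 for s in states}  # terminal value V_N = 0
    policy = [None] * N
    for n in reversed(range(N)):
        V_new, f = {}, {}
        for s in states:
            best_a, best_q = None, float("-inf")
            for a in actions:
                q = r[s][a] + beta * sum(p * V[t] for t, p in P[s][a].items())
                if q > best_q:
                    best_q, best_a = q, a
            V_new[s], f[s] = best_q, best_a
        V, policy[n] = V_new, f
    return V, policy

# Made-up two-state, two-action example.
states, actions = ["good", "bad"], ["maintain", "skip"]
P = {
    "good": {"maintain": {"good": 0.9, "bad": 0.1}, "skip": {"good": 0.5, "bad": 0.5}},
    "bad":  {"maintain": {"good": 0.6, "bad": 0.4}, "skip": {"bad": 1.0}},
}
r = {
    "good": {"maintain": 1.0, "skip": 2.0},
    "bad":  {"maintain": -1.0, "skip": 0.0},
}
V0, policy = backward_induction(states, actions, P, r, beta=0.9, N=3)
```

The stage-dependence of the maximizers is visible even in this toy model: at the last stage only the immediate reward matters, while at earlier stages the continuation value can make a different action optimal.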
0 references
dynamic programming
0 references
stochastic control
0 references
dynamic optimization
0 references
Markovian decision process
0 references
stochastic dynamic program
0 references