Three different operations research models for the same \((s,S)\) policy. (Q5929484)
From MaRDI portal
scientific article; zbMATH DE number 1585108
Statements
Three different operations research models for the same \((s,S)\) policy. (English)
2001
Summary: Operations research techniques are usually presented as distinct models. Although establishing links between these models is often difficult, doing so can reveal their interdependence and make them easier for the user to understand. In this article three different models, namely a Markov chain, dynamic programming, and a Markov sequential decision process, are used to solve an inventory problem based on the periodic review system. We show how the three models converge to the same \((s,S)\) policy, and we provide a numerical example to illustrate this convergence.
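The periodic-review \((s,S)\) policy the abstract refers to can be sketched as a short simulation. This is a minimal illustration, not the article's model: the function name, parameters, and the simplifying assumptions of zero lead time and lost sales are all hypothetical.

```python
def simulate_s_S(s, S, periods, demand_fn, start_inv):
    """Simulate a periodic-review (s, S) policy: at each review epoch,
    if the inventory position is at or below the reorder point s,
    order up to the order-up-to level S; otherwise do not order.
    Assumes zero lead time and lost sales (illustrative simplifications)."""
    inv = start_inv
    history = []
    for t in range(periods):
        order = S - inv if inv <= s else 0  # order-up-to rule
        inv += order                        # order arrives immediately
        demand = demand_fn()
        inv = max(inv - demand, 0)          # unmet demand is lost
        history.append((t, order, demand, inv))
    return history

# Example with deterministic demand of 3 units per period,
# reorder point s = 2, order-up-to level S = 10:
trace = simulate_s_S(2, 10, 5, lambda: 3, 5)
```

With these numbers, no order is placed in period 0 (inventory 5 is above \(s = 2\)); in period 1 inventory has fallen to 2, so the policy orders \(10 - 2 = 8\) units to restore the level to \(S = 10\).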
periodic review system
Markov chain
dynamic programming
Markov sequential decision processes