Approximate receding horizon approach for Markov decision processes: average reward case (Q1414220)
scientific article
Statements
20 November 2003
The authors consider an approximation scheme for solving Markov decision processes (MDPs) with countable state space, finite action space, and bounded rewards. The scheme, which they call ''approximate receding horizon control'', uses an approximate solution of a fixed finite-horizon sub-MDP of a given infinite-horizon MDP to define a stationary policy. They analyze the performance of approximate receding horizon control under suitable conditions, study two examples, provide a simple proof of a policy-improvement result for countable state spaces, and discuss practical implementations of these schemes via simulation.
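The scheme described above — solve a finite-horizon sub-problem by dynamic programming, then act greedily with respect to the resulting finite-horizon value function — can be illustrated on a toy problem. The following is a minimal sketch, not the paper's construction (which treats countable state spaces, approximate sub-MDP solutions, and average-reward performance bounds); the MDP data, horizon, and all function names here are hypothetical.

```python
# Hypothetical toy MDP: 2 states, 2 actions.
# P[s][a] = list of (next_state, probability); R[s][a] = expected one-step reward.
P = {
    0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.2), (1, 0.8)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]},
}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 2.0}}
STATES, ACTIONS = [0, 1], [0, 1]

def finite_horizon_value(H):
    """H-step dynamic programming:
    V_h(s) = max_a [ R(s,a) + sum_{s'} P(s'|s,a) V_{h-1}(s') ], V_0 = 0."""
    V = {s: 0.0 for s in STATES}
    for _ in range(H):
        V = {s: max(R[s][a] + sum(p * V[s2] for s2, p in P[s][a])
                    for a in ACTIONS)
             for s in STATES}
    return V

def receding_horizon_policy(H):
    """Stationary policy: in every state, take the first action of an
    H-horizon optimal plan, i.e. act greedily w.r.t. the (H-1)-step value."""
    V = finite_horizon_value(H - 1)
    return {s: max(ACTIONS,
                   key=lambda a: R[s][a] + sum(p * V[s2] for s2, p in P[s][a]))
            for s in STATES}

policy = receding_horizon_policy(H=10)
```

The key point the sketch makes concrete is that the policy is stationary: the finite-horizon value function is computed once, and the same greedy rule is applied at every stage of the infinite-horizon problem.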
Markov decision process
receding horizon control
infinite-horizon average reward
policy improvement
ergodicity