Average optimality for continuous-time Markov decision processes with a policy iteration approach (Q2465179)
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Average optimality for continuous-time Markov decision processes with a policy iteration approach | scientific article | |
Statements
Average optimality for continuous-time Markov decision processes with a policy iteration approach (English)
8 January 2008
The paper deals with the average expected reward criterion for continuous-time Markov decision processes in general state and action spaces. The transition rates of the underlying continuous-time jump Markov processes are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. The author gives conditions on the system's primitive data under which he proves the existence of a solution to the average reward optimality equation and of an average optimal stationary policy. Under the same conditions, the existence of \(\epsilon\)-average optimal stationary policies is also established, and some properties of average optimal stationary policies are studied. The author not only establishes a second average optimality equation satisfied by an average optimal stationary policy, but also presents an interesting ``martingale characterization'' of such a policy. The approach, based on the policy iteration algorithm, differs from the ``vanishing discount factor'' and ``optimality inequality'' approaches usually used in the literature. The reference list contains 31 items.
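The average reward optimality equation mentioned in the review has, in its standard form, \(g^* = \sup_{a \in A(x)} \{ r(x,a) + \int_S h(y)\, q(dy \mid x, a) \}\) for all states \(x\), where \(q\) is the transition rate kernel and \(h\) is a bias function. As a concrete illustration of the policy iteration algorithm behind the paper's approach, the following is a minimal sketch for the finite, unichain case with bounded rates; the paper's general setting (general spaces, unbounded transition and reward rates) requires the conditions above, and the function name and data layout here are illustrative assumptions, not the author's construction.

```python
import numpy as np

def policy_iteration_ctmdp(r, Q, max_iter=100):
    """Average-reward policy iteration for a finite unichain CTMDP (illustrative).

    r[x, a]    -- reward rate for action a in state x
    Q[a][x, y] -- transition rate q(y | x, a); each row of Q[a] sums to zero
    Returns the gain g, a bias vector h (with h[0] = 0), and a policy f.
    """
    n_states, n_actions = r.shape
    f = np.zeros(n_states, dtype=int)              # initial stationary policy
    for _ in range(max_iter):
        # Policy evaluation: solve  g - sum_y q(y|x,f(x)) h(y) = r(x,f(x))
        # with the normalization h[0] = 0, so the unknowns are (g, h[1:]).
        Qf = np.array([Q[f[x]][x] for x in range(n_states)])
        rf = r[np.arange(n_states), f]
        A = np.zeros((n_states, n_states))
        A[:, 0] = 1.0                              # coefficient of the gain g
        A[:, 1:] = -Qf[:, 1:]                      # coefficients of h[1:]
        sol = np.linalg.solve(A, rf)
        g, h = sol[0], np.concatenate(([0.0], sol[1:]))
        # Policy improvement: maximize r(x,a) + sum_y q(y|x,a) h(y) over a.
        vals = np.array([[r[x, a] + Q[a][x] @ h for a in range(n_actions)]
                         for x in range(n_states)])
        best = vals.argmax(axis=1)
        # Keep the current action when it is already (numerically) optimal;
        # this prevents cycling between equally good maximizers.
        keep = vals[np.arange(n_states), f] >= vals[np.arange(n_states), best] - 1e-10
        f_new = np.where(keep, f, best)
        if np.array_equal(f_new, f):
            break
        f = f_new
    return g, h, f
```

Each evaluation step solves one linear system for the gain and bias (with the bias pinned to zero in a reference state), and the improvement step maximizes the right-hand side of the optimality equation, mirroring the iteration scheme the review describes.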
continuous-time Markov decision processes
policy iteration algorithm
average criterion
optimality equation
optimal stationary policy