Average optimality for continuous-time Markov decision processes in Polish spaces

From MaRDI portal
Publication:997948

DOI: 10.1214/105051606000000105
zbMATH Open: 1160.90010
arXiv: math/0607098
OpenAlex: W2026850974
MaRDI QID: Q997948

Xianping Guo, Ulrich Rieder

Publication date: 8 August 2007

Published in: The Annals of Applied Probability

Abstract: This paper studies average optimality in continuous-time Markov decision processes with fairly general state and action spaces. The criterion to be maximized is the expected average reward. The transition rates of the underlying continuous-time jump Markov processes may be unbounded, and the reward rates may have neither upper nor lower bounds. We first provide two optimality inequalities with opposed directions, and give suitable conditions under which the existence of solutions to both inequalities is ensured. From the two optimality inequalities we then prove the existence of optimal (deterministic) stationary policies using the Dynkin formula. Moreover, we present a "semimartingale characterization" of an optimal stationary policy. Finally, we use a generalized Potlach process with control to illustrate the difference between our conditions and those in the previous literature, and then apply our results to average optimal control problems for generalized birth-death systems, upwardly skip-free processes and two queueing systems. The approach developed in this paper differs slightly from the "optimality inequality approach" widely used in the previous literature.
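For orientation, the average-reward criterion and the pair of opposed optimality inequalities described in the abstract take roughly the following schematic form. The notation here (reward rate r, transition rates q, bias functions h, constants g) is generic, not the paper's exact statement:

```latex
% Average-reward criterion over policies \pi (schematic):
J(x,\pi) \;=\; \liminf_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}_x^{\pi}\!\left[\int_0^T r(x_t,a_t)\,dt\right],
\qquad J^*(x) \;=\; \sup_{\pi} J(x,\pi).

% Two optimality inequalities with opposed directions: constants g_1, g_2
% and bias functions h_1, h_2 satisfying, for all states x,
g_1 \;\ge\; \sup_{a\in A(x)} \Big\{ r(x,a) + \int_S h_1(y)\, q(dy\mid x,a) \Big\},
\qquad
g_2 \;\le\; \sup_{a\in A(x)} \Big\{ r(x,a) + \int_S h_2(y)\, q(dy\mid x,a) \Big\}.
```

Together such inequalities sandwich the optimal average reward, and an action attaining the supremum yields a deterministic stationary policy, verified via the Dynkin formula.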


Full work available at URL: https://arxiv.org/abs/math/0607098
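Among the paper's applications are generalized birth-death systems. As a loose finite-state illustration of the average-reward criterion (not the paper's method, which handles unbounded rates and Polish spaces), one can truncate a controlled birth-death chain, uniformize it, and run relative value iteration. All rates, costs, and the truncation level below are assumptions chosen for the sketch:

```python
import numpy as np

# Toy controlled birth-death chain on states 0..N (truncated), with a
# "slow"/"fast" service action. This bounded-rate sketch only illustrates
# the average-reward criterion; the paper treats unbounded rates.
N = 20
lam = 1.0                 # birth (arrival) rate -- assumed for illustration
mus = [0.5, 2.0]          # death (service) rates for actions 0 (slow), 1 (fast)
costs = [0.0, 1.5]        # control cost per unit time for each action

# Uniformization constant; the +1 slack keeps strictly positive self-loops,
# so the discrete-time chain is aperiodic and value iteration converges.
Lambda = lam + max(mus) + 1.0
S, A = N + 1, len(mus)

# Uniformized transitions P[a] = I + Q[a]/Lambda and reward rates r[x,a].
P = np.zeros((A, S, S))
r = np.zeros((S, A))
for a, (mu, c) in enumerate(zip(mus, costs)):
    for x in range(S):
        up = lam if x < N else 0.0      # births blocked at truncation level N
        down = mu if x > 0 else 0.0     # no death in the empty state
        P[a, x, min(x + 1, N)] += up / Lambda
        P[a, x, max(x - 1, 0)] += down / Lambda
        P[a, x, x] += 1.0 - (up + down) / Lambda
        r[x, a] = -x - c                # reward: negative holding + control cost

# Relative value iteration. The uniformized chain shares its stationary law
# with the continuous-time chain, so its per-step average of r equals the
# continuous-time average reward per unit time.
h = np.zeros(S)
for _ in range(20000):
    Th = (r + np.einsum("axy,y->xa", P, h)).max(axis=1)
    h_new = Th - Th[0]                  # normalize bias at state 0
    if np.max(np.abs(h_new - h)) < 1e-10:
        break
    h = h_new
g = Th[0]                               # approximate optimal average reward
policy = (r + np.einsum("axy,y->xa", P, h)).argmax(axis=1)
print("approx. optimal average reward:", round(g, 4))
print("stationary policy (action per state):", policy)
```

The computed policy is deterministic stationary, mirroring the existence result in the paper; here it idles (slow, free) in the empty state and switches to the costly fast service when the queue grows.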







Cited In (30)






