Discounted continuous-time constrained Markov decision processes in Polish spaces

From MaRDI portal
Publication:655591

DOI: 10.1214/10-AAP749
zbMATH Open: 1258.90104
arXiv: 1201.0089
MaRDI QID: Q655591

Xin-Yuan Song, Xianping Guo

Publication date: 4 January 2012

Published in: The Annals of Applied Probability

Abstract: This paper is devoted to studying constrained continuous-time Markov decision processes (MDPs) in the class of randomized policies depending on state histories. The transition rates may be unbounded, the reward and cost rates are allowed to be unbounded from above and from below, and the state and action spaces are Polish spaces. The optimality criterion to be maximized is the expected discounted reward, and the constraints are imposed on the expected discounted costs. First, we give conditions for the nonexplosion of the underlying processes and the finiteness of the expected discounted rewards/costs. Second, using a technique of occupation measures, we prove that the constrained optimality problem for continuous-time MDPs can be transformed into an equivalent optimality problem over a class of probability measures. Based on the equivalent problem and a so-called -weak convergence of probability measures developed in this paper, we show the existence of a constrained optimal policy. Third, by providing a linear programming formulation of the equivalent problem, we show the solvability of constrained optimal policies. Finally, we use two computable examples to illustrate our main results.


Full work available at URL: https://arxiv.org/abs/1201.0089
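The occupation-measure reduction described in the abstract turns the constrained optimization over policies into a linear program over measures. A minimal sketch of this idea, for a hypothetical 2-state, 2-action *discrete-time* discounted MDP (the paper itself treats continuous-time MDPs on Polish spaces with unbounded rates, which this toy model does not capture), using `scipy.optimize.linprog`:

```python
# Finite discrete-time analogue of the occupation-measure LP.
# Assumptions: toy transition kernel P, reward r, cost c_cost, and budget
# are all made up for illustration; the paper's setting is far more general.
import numpy as np
from scipy.optimize import linprog

S, A = 2, 2                          # states, actions
gamma = 0.9                          # discount factor
mu0 = np.array([1.0, 0.0])           # initial distribution
budget = 5.0                         # bound on expected discounted cost

# P[s, a, s']: transition probabilities; r, c_cost: reward and cost rates
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 2.0], [0.5, 3.0]])
c_cost = np.array([[0.0, 1.0], [0.0, 2.0]])

# Balance equations characterizing discounted occupation measures x(s, a):
#   sum_a x(s', a) - gamma * sum_{s, a} P(s' | s, a) x(s, a) = mu0(s')
A_eq = np.zeros((S, S * A))
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(s == sp) - gamma * P[s, a, sp]

# Maximize discounted reward (linprog minimizes, hence the sign flip),
# subject to the discounted-cost constraint  c_cost . x <= budget.
res = linprog(-r.flatten(),
              A_ub=[c_cost.flatten()], b_ub=[budget],
              A_eq=A_eq, b_eq=mu0,
              bounds=(0, None), method="highs")

x = res.x.reshape(S, A)
# A constrained optimal (generally randomized) stationary policy is
# recovered by normalizing the occupation measure state by state:
policy = x / x.sum(axis=1, keepdims=True)
print("occupation measure:\n", x)
print("policy:\n", policy)
```

Note the total mass of any feasible occupation measure here is 1/(1 - gamma), which follows from summing the balance equations over states; this mirrors the normalization used when the equivalent problem is posed over probability measures.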










Cited In (29)





