Maximizing the probability of visiting a set infinitely often for a countable state space Markov decision process
Publication: 2235986
DOI: 10.1016/j.jmaa.2021.125639 · zbMath: 1479.90211 · OpenAlex: W3197076938 · MaRDI QID: Q2235986
Tomás Prieto-Rumeau, François Dufour
Publication date: 22 October 2021
Published in: Journal of Mathematical Analysis and Applications
Full work available at URL: https://doi.org/10.1016/j.jmaa.2021.125639
Keywords: Markov decision process · countable state space · non-additive optimality criterion · visiting set infinitely often
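For orientation, the criterion named in the title is commonly written as follows; this is a sketch of the standard formulation, with the notation ($\pi$ a policy, $x$ an initial state, $(X_t)$ the controlled state process, $B$ the target set) assumed here rather than taken from the entry:
\[
\sup_{\pi}\; \mathbb{P}^{\pi}_{x}\bigl(X_{t} \in B \text{ for infinitely many } t\bigr).
\]
The objective is non-additive because the event $\{X_t \in B \text{ infinitely often}\}$ cannot be expressed as an expected sum of per-stage rewards.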
Cites Work
- On compactness of the space of policies in stochastic dynamic programming
- On dynamic programming: Compactness of the space of policies
- On an extremal property of Markov chains and sufficiency of Markov strategies in Markov decision processes with the Dubins-Savage criterion
- Strong Uniform Value in Gambling Houses and Partially Observable Markov Decision Processes
- Stationary Policies in Dynamic Programming Models Under Compactness Assumptions
- The Decomposition-Separation Theorem for Finite Nonhomogeneous Markov Chains and Related Problems
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- More Risk-Sensitive Markov Decision Processes
- On the chance to visit a goal set infinitely often
- Characterization of the Optimal Risk-Sensitive Average Cost in Denumerable Markov Decision Chains
- Continuous-Time Markov Decision Processes with Exponential Utility