Distribution-based objectives for Markov decision processes

From MaRDI portal
DOI: 10.1145/3209108.3209185
zbMATH Open: 1497.68355
arXiv: 1804.09341
OpenAlex: W2964314311
Wikidata: Q130958561 (Scholia: Q130958561)
MaRDI QID: Q5145274

S. Akshay, Blaise Genest, Nikhil Vyas

Publication date: 20 January 2021

Published in: Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science

Abstract: We consider distribution-based objectives for Markov Decision Processes (MDPs). This class of objectives gives rise to an interesting trade-off between full and partial information. As with full observation, the strategy in the MDP can depend on the state of the system; but as with partial information, the strategy needs to account for all the states at the same time. In this paper, we focus on two safety problems that arise naturally in this context, namely, existential and universal safety. Given an MDP A and a closed, convex polytope H of probability distributions over the states of A, the existential safety problem asks whether there exist some distribution d in H and a strategy of A such that, starting from d, repeatedly applying this strategy keeps the distribution forever in H. The universal safety problem asks whether for every distribution in H there exists such a strategy of A which keeps the distribution forever in H. We prove that both problems are decidable, with tight complexity bounds: we show that existential safety is PTIME-complete, while universal safety is co-NP-complete. Further, we compare these results with the existential and universal safety problems for Rabin's probabilistic finite-state automata (PFAs), the subclass of Partially Observable MDPs which have zero observation. In contrast to MDPs, strategies of PFAs are not state-dependent. In sharp contrast to the PTIME result, we show that existential safety for PFAs is undecidable, for H with closed as well as open boundaries. On the other hand, it turns out that universal safety for PFAs is decidable in EXPTIME, with a co-NP lower bound. Finally, we show that an alternate representation of the input polytope allows us to improve the complexity of universal safety for MDPs and PFAs.
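The central dynamic in the abstract can be sketched concretely: a state-dependent strategy, applied to a distribution over states, pushes that whole distribution forward in one step, and safety asks whether it can be kept inside the polytope H forever. The following minimal Python sketch (a toy 2-state MDP; all matrices, the strategy, and the constraint defining H are hypothetical illustrations, not taken from the paper) simulates a few steps of this distribution update and checks membership in a simple half-space polytope:

```python
import numpy as np

# Toy 2-state MDP with two actions; each action is a row-stochastic
# transition matrix. All values here are illustrative assumptions.
M = {
    "a": np.array([[0.5, 0.5],
                   [0.0, 1.0]]),
    "b": np.array([[1.0, 0.0],
                   [0.5, 0.5]]),
}

def step(d, choice):
    """One distribution update under a state-dependent strategy:
    the mass in each state s flows according to the action choice[s],
    i.e. d'(t) = sum_s d(s) * M[choice[s]][s, t]."""
    out = np.zeros(len(d))
    for s in range(len(d)):
        out += d[s] * M[choice[s]][s]
    return out

def in_H(d, tol=1e-9):
    """Membership in a convex polytope H given by linear constraints;
    here H = { d on the simplex : d[0] >= 0.4 } (a hypothetical choice)."""
    return d[0] >= 0.4 - tol

# A memoryless, state-dependent strategy: both states play action "b".
d = np.array([0.6, 0.4])
for _ in range(5):
    assert in_H(d)          # the distribution stays in H at every step
    d = step(d, {0: "b", 1: "b"})
assert in_H(d)
```

Under this particular strategy the mass drains toward state 0, so the half-space constraint holds at every step; existential safety asks whether some starting d in H admits such a forever-safe strategy, while universal safety demands one for every d in H.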


Full work available at URL: https://arxiv.org/abs/1804.09341








Cited In (4)





