Dynamic Programming for POMDP with Jointly Discrete and Continuous State-Spaces

From MaRDI portal
Publication: 6299488

arXiv: 1803.08876 · MaRDI QID: Q6299488 · FDO: Q6299488


Authors: Dong Hwan Lee, Niao He, Jianghai Hu


Publication date: 23 March 2018

Abstract: In this work, we study dynamic programming (DP) algorithms for partially observable Markov decision processes (POMDPs) with jointly continuous and discrete state-spaces. We consider a class of stochastic systems with coupled discrete and continuous dynamics, where only the continuous state is observable. This family of systems includes many real-world systems, for example, Markovian jump linear systems and physical systems interacting with humans. A finite history of observations is used as a new information state, and the convergence of the corresponding DP algorithms is proved. In particular, we prove that the DP iterations converge to a certain bounded set around an optimal solution. Although this paper studies deterministic DP algorithms, we expect this fundamental work to lay the foundation for future studies of reinforcement learning algorithms for the same family of systems.
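The core construction in the abstract, using a finite history of observations as an information state, can be illustrated with a small sketch. Note that everything below (the observation alphabet, the kernel `next_obs_dist`, the reward, and all numerical values) is a hypothetical toy model, not the system class or algorithm from the paper: we simply treat length-N observation histories as states of a derived finite MDP and run standard discounted value iteration on them.

```python
import itertools

GAMMA = 0.9        # discount factor (illustrative choice)
OBS = (0, 1)       # quantized observations (toy alphabet)
ACTS = (0, 1)
N = 2              # history length: information state = last N observations

# All length-N observation histories serve as states of the derived MDP.
HISTS = list(itertools.product(OBS, repeat=N))

def next_obs_dist(h, a):
    """Hypothetical kernel: distribution of the next observation given the
    current history h and action a. Action 1 biases toward repeating the
    most recent observation."""
    p_repeat = 0.8 if a == 1 else 0.5
    last = h[-1]
    return {last: p_repeat, 1 - last: 1.0 - p_repeat}

def reward(h, a):
    """Toy reward: prefer observing 1, with a small cost for action 1."""
    return float(h[-1]) - 0.1 * a

def value_iteration(tol=1e-8, max_iter=10_000):
    """Standard value iteration on the history-MDP. Each sweep shifts the
    history window: the successor of h under observation o is h[1:] + (o,)."""
    V = {h: 0.0 for h in HISTS}
    for _ in range(max_iter):
        V_new = {}
        for h in HISTS:
            q_values = []
            for a in ACTS:
                expected = sum(p * V[h[1:] + (o,)]
                               for o, p in next_obs_dist(h, a).items())
                q_values.append(reward(h, a) + GAMMA * expected)
            V_new[h] = max(q_values)
        if max(abs(V_new[h] - V[h]) for h in HISTS) < tol:
            return V_new
        V = V_new
    return V

V = value_iteration()
```

Because the Bellman operator on this finite derived MDP is a gamma-contraction, the iterates converge to a fixed point; in the paper's setting, where histories only approximate the true (partially hidden) state, the iterates instead converge to a bounded set around an optimal solution.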



