Experimental design for partially observed Markov decision processes

From MaRDI portal
Publication:3176233

DOI: 10.1137/16M1084924
zbMATH Open: 1391.90634
arXiv: 1209.4019
OpenAlex: W2962776633
Wikidata: Q130014496 (Scholia: Q130014496)
MaRDI QID: Q3176233 (FDO: Q3176233)


Authors: Leifur Thorbergsson, Giles Hooker


Publication date: 19 July 2018

Published in: SIAM/ASA Journal on Uncertainty Quantification

Abstract: This paper deals with the question of how to most effectively conduct experiments in partially observed Markov decision processes (POMDPs) so as to provide data that are most informative about a parameter of interest. Methods from Markov decision processes, especially dynamic programming, are introduced and then used in an algorithm to maximize a relevant Fisher information. The algorithm is then applied to two POMDP examples. The methods developed can also be applied to stochastic dynamical systems via suitable discretization; we show what the resulting control policies look like in the Morris-Lecar neuron model and present simulation results. We discuss how parameter dependence within these methods can be handled through the use of priors, and we develop tools to update control policies online. This is demonstrated in another stochastic dynamical system describing the growth dynamics of a DNA template in a PCR model.
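The backward-induction idea sketched in the abstract can be illustrated on a toy problem. The model below is entirely hypothetical (a two-state controlled Markov chain, not the paper's Morris-Lecar or PCR examples), and the reward used in the dynamic program is a per-step Fisher information increment about an unknown transition parameter theta; all names and numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical two-state, two-action Markov chain whose transition
# probabilities depend on an unknown parameter theta.  Backward
# induction picks, at each step, the action maximizing the expected
# total Fisher information about theta accumulated over the horizon.

theta = 0.3                      # design value (e.g. a prior guess)
n_states, n_actions, horizon = 2, 2, 10

def trans(th):
    """P[a, s, s']: action-dependent transition matrices (illustrative)."""
    P = np.empty((n_actions, n_states, n_states))
    P[0] = [[1 - th, th], [th, 1 - th]]          # action 0: symmetric mixing
    P[1] = [[1 - th / 2, th / 2], [0.5, 0.5]]    # action 1: theta enters only from state 0
    return P

# Per-step Fisher information I[a, s] = sum_s' (dp/dtheta)^2 / p,
# with the derivative taken by finite differences.
eps = 1e-6
P = trans(theta)
dP = (trans(theta + eps) - P) / eps
I = np.where(P > 0, dP**2 / P, 0.0).sum(axis=2)  # shape (n_actions, n_states)

# Backward induction: V_t(s) = max_a [ I(s,a) + sum_s' P(s'|s,a) V_{t+1}(s') ]
V = np.zeros(n_states)
policy = np.zeros((horizon, n_states), dtype=int)
for t in reversed(range(horizon)):
    Q = I + P @ V                # Q[a, s]; P @ V contracts over s'
    policy[t] = Q.argmax(axis=0)
    V = Q.max(axis=0)

print("expected total Fisher information per start state:", V)
print("first-step policy:", policy[0])
```

In this toy chain action 0 is uniformly most informative, so the optimal policy is constant; in a genuine POMDP the paper works with the belief state instead of the latent state, which this sketch does not attempt.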


Full work available at URL: https://arxiv.org/abs/1209.4019










Cited In (8)





