Off-policy evaluation in partially observed Markov decision processes under sequential ignorability


DOI: 10.1214/23-AOS2287
arXiv: 2110.12343
MaRDI QID: Q6183750


Authors: Yu-Chen Hu, Stefan Wager


Publication date: 4 January 2024

Published in: The Annals of Statistics

Abstract: We consider off-policy evaluation of dynamic treatment rules under sequential ignorability, given an assumption that the underlying system can be modeled as a partially observed Markov decision process (POMDP). We propose an estimator, partial history importance weighting, and show that it can consistently estimate the stationary mean rewards of a target policy given long enough draws from the behavior policy. We provide an upper bound on its error that decays polynomially in the number of observations (i.e., the number of trajectories times their length), with an exponent that depends on the overlap of the target and behavior policies, and on the mixing time of the underlying system. Furthermore, we show that this rate of convergence is minimax given only our assumptions on mixing and overlap. Our results establish that off-policy evaluation in POMDPs is strictly harder than off-policy evaluation in (fully observed) Markov decision processes, but strictly easier than model-free off-policy evaluation.


Full work available at URL: https://arxiv.org/abs/2110.12343
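The abstract describes the partial history importance weighting estimator only informally. As a rough illustration, here is a minimal Python sketch of partial-history importance weighting, under assumptions not stated in this entry: memoryless policies that map the current observation to action probabilities, a fixed history window k, and trajectories stored as lists of (observation, action, reward) triples collected under the behavior policy. All names here (pi_target, pi_behavior, partial_history_importance_weighting) are hypothetical, and the paper's exact weighting scheme may differ.

```python
import numpy as np


def partial_history_importance_weighting(trajectories, pi_target, pi_behavior, k):
    """Self-normalized partial-history importance weighting (sketch).

    trajectories: list of trajectories, each a list of
                  (observation, action, reward) triples collected
                  under the behavior policy.
    pi_target, pi_behavior: functions (action, observation) -> probability
                            of choosing that action given that observation.
    k: length of the partial history used for reweighting.
    """
    weighted_rewards = []
    weights = []
    for traj in trajectories:
        for t in range(k - 1, len(traj)):
            # Importance ratio over the last k action choices only,
            # rather than over the full history back to time 0.
            w = 1.0
            for s in range(t - k + 1, t + 1):
                obs, act, _ = traj[s]
                # Overlap assumption: pi_behavior(act, obs) > 0 whenever
                # pi_target(act, obs) > 0.
                w *= pi_target(act, obs) / pi_behavior(act, obs)
            weighted_rewards.append(w * traj[t][2])
            weights.append(w)
    # Self-normalizing by the total weight is one common variance-control
    # choice; dividing by the number of terms instead gives the plain
    # importance-weighted mean.
    return float(np.sum(weighted_rewards) / np.sum(weights))
```

The window length k governs a bias-variance tradeoff: a longer window reduces the bias introduced by the unobserved state, but the product of k likelihood ratios inflates the variance, which is consistent with the abstract's statement that the error exponent depends on the overlap of the two policies and on the mixing time of the system.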










Cited in: 2 publications





