Projected state-action balancing weights for offline reinforcement learning

From MaRDI portal
Publication:6183753

DOI: 10.1214/23-AOS2302
arXiv: 2109.04640
MaRDI QID: Q6183753
FDO: Q6183753


Authors: Jiayi Wang, Zhengling Qi, Raymond Wong


Publication date: 4 January 2024

Published in: The Annals of Statistics

Abstract: Offline policy evaluation (OPE) is a fundamental and challenging problem in reinforcement learning (RL). This paper focuses on estimating the value of a target policy based on pre-collected data generated from a possibly different policy, under the framework of infinite-horizon Markov decision processes. Motivated by the recently developed marginal importance sampling method in RL and the covariate balancing idea in causal inference, we propose a novel estimator with approximately projected state-action balancing weights for policy value estimation. We obtain the convergence rate of these weights and show that the proposed value estimator is semiparametrically efficient under technical conditions. In terms of asymptotics, our results scale with both the number of trajectories and the number of decision points in each trajectory. As such, consistency can still be achieved with a limited number of subjects when the number of decision points diverges. In addition, we develop a necessary and sufficient condition for the well-posedness of the Bellman operator in the off-policy setting, which characterizes the difficulty of OPE and may be of independent interest. Numerical experiments demonstrate the promising performance of the proposed estimator.
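
For orientation, here is a minimal sketch of the marginal importance sampling identity that motivates state-action balancing weights; the notation ($d^{\pi}$, $\bar d$, $\omega^{\pi}$, $\gamma$, $n$, $T$) is ours, and the paper's projected balancing construction of the estimated weights differs in its details.
\[
  V(\pi) \;=\; \frac{1}{1-\gamma}\, \mathbb{E}_{(S,A)\sim d^{\pi}}\bigl[R(S,A)\bigr]
         \;=\; \frac{1}{1-\gamma}\, \mathbb{E}_{(S,A)\sim \bar d}\bigl[\omega^{\pi}(S,A)\, R(S,A)\bigr],
  \qquad \omega^{\pi}(s,a) \;=\; \frac{d^{\pi}(s,a)}{\bar d(s,a)},
\]
where $d^{\pi}$ is the discounted state-action visitation distribution of the target policy and $\bar d$ is the state-action distribution of the pre-collected data. Given estimated weights $\widehat{\omega}$, a plug-in value estimate over $n$ trajectories with $T$ decision points each is
\[
  \widehat{V}(\pi) \;=\; \frac{1}{(1-\gamma)\, nT} \sum_{i=1}^{n} \sum_{t=0}^{T-1} \widehat{\omega}(S_{i,t},A_{i,t})\, R_{i,t}.
\]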


Full work available at URL: https://arxiv.org/abs/2109.04640






