Statistically Efficient Advantage Learning for Offline Reinforcement Learning in Infinite Horizons

Publication: 6153987

DOI: 10.1080/01621459.2022.2106868
arXiv: 2202.13163
Wikidata: Q114898043
Scholia: Q114898043
MaRDI QID: Q6153987
FDO: Q6153987


Authors: Chengchun Shi, Shikai Luo, Yuan Le, Hongtu Zhu, Rui Song


Publication date: 19 March 2024

Published in: Journal of the American Statistical Association

Abstract: We consider reinforcement learning (RL) methods in offline domains without additional online data collection, such as mobile health applications. Most existing policy optimization algorithms in the computer science literature are developed in online settings where data are easy to collect or simulate. Their generalizations to mobile health applications with a pre-collected offline dataset remain unknown. The aim of this paper is to develop a novel advantage learning framework that efficiently uses pre-collected data for policy optimization. The proposed method takes an optimal Q-estimator computed by any existing state-of-the-art RL algorithm as input, and outputs a new policy whose value is guaranteed to converge at a faster rate than the policy derived from the initial Q-estimator. Extensive numerical experiments are conducted to back up our theoretical findings. A Python implementation of our proposed method is available at https://github.com/leyuanheart/SEAL.
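
The abstract describes a plug-in workflow: an offline Q-estimator fitted by any existing RL algorithm goes in, and a policy whose value converges at a faster rate comes out. The sketch below illustrates that workflow in a toy form; it is not the authors' SEAL implementation (see the GitHub link above). The regression model, the TD-style pseudo-outcome used to re-fit the advantage, and all function names are illustrative assumptions.

# A minimal sketch (not the SEAL estimator itself) of the abstract's workflow:
# take a fitted Q-estimator, extract the plug-in greedy policy, re-estimate the
# advantage A(s, a) = Q(s, a) - V(s) from offline transitions, and act greedily
# with respect to the re-estimate. The Q-estimator is assumed to predict
# Q_hat(s, a) from concatenated (state, action) features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def q_matrix(q_estimator, states, n_actions):
    """Evaluate Q_hat(s, a) for every state in `states` and every action."""
    return np.stack(
        [q_estimator.predict(np.column_stack([states, np.full(len(states), a)]))
         for a in range(n_actions)], axis=1)

def greedy_policy(q_estimator, states, n_actions):
    """Plug-in policy derived from the initial Q-estimator: argmax_a Q_hat(s, a)."""
    return q_matrix(q_estimator, states, n_actions).argmax(axis=1)

def refit_advantage(q_estimator, batch, n_actions, gamma=0.9):
    """Re-fit the advantage from offline transitions (s, a, r, s').

    Pseudo-outcome: r + gamma * max_a' Q_hat(s', a') - V_hat(s), a TD-style
    target whose conditional mean approximates the advantage when Q_hat is
    accurate. The efficient construction in the paper is more involved; this
    regression is only an illustration.
    """
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    v_next = q_matrix(q_estimator, s_next, n_actions).max(axis=1)
    v_hat = q_matrix(q_estimator, s, n_actions).max(axis=1)
    pseudo_outcome = r + gamma * v_next - v_hat
    return RandomForestRegressor(n_estimators=200).fit(
        np.column_stack([s, a]), pseudo_outcome)

def improved_policy(adv_model, states, n_actions):
    """Output policy: act greedily with respect to the re-estimated advantage."""
    adv = np.stack(
        [adv_model.predict(np.column_stack([states, np.full(len(states), a)]))
         for a in range(n_actions)], axis=1)
    return adv.argmax(axis=1)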


Full work available at URL: https://arxiv.org/abs/2202.13163













