Online Bootstrap Inference for Policy Evaluation in Reinforcement Learning

MaRDI QID: Q6185586
DOI: 10.1080/01621459.2022.2096620
arXiv: 2108.03706
OpenAlex: W3191746168


Authors: Pratik Ramprasad, Yuantong Li, Zhuoran Yang, Zhaoran Wang, Will Wei Sun, Guang Cheng


Publication date: 8 January 2024

Published in: Journal of the American Statistical Association

Abstract: The recent emergence of reinforcement learning (RL) has created a demand for robust statistical inference methods for the parameter estimates computed by these algorithms. Existing methods for statistical inference in online learning are restricted to settings with independently sampled observations, while existing statistical inference methods in RL are limited to the batch setting. The online bootstrap is a flexible and efficient approach for statistical inference in linear stochastic approximation algorithms, but its efficacy in settings with Markov noise, such as RL, has yet to be explored. In this paper, we study the use of the online bootstrap method for statistical inference in RL. In particular, we focus on the temporal difference (TD) learning and gradient TD (GTD) learning algorithms, both special instances of linear stochastic approximation under Markov noise. The method is shown to be distributionally consistent for statistical inference in policy evaluation, and numerical experiments demonstrate its effectiveness at statistical inference tasks across a range of real RL environments.
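For intuition, here is a minimal sketch (in Python) of the kind of scheme the abstract describes: TD(0) with linear value features, run alongside bootstrap replicates whose updates are rescaled by i.i.d. random multipliers with mean 1 and variance 1, so that the spread of the replicates tracks the sampling variability of the estimator. The transitions interface, the Exponential(1) weight distribution, the step-size schedule, and all identifiers below are illustrative assumptions, not the paper's exact algorithm.

    import numpy as np

    def online_bootstrap_td(transitions, d, num_boot=200, gamma=0.95, seed=0):
        """Sketch: online multiplier-bootstrap TD(0) with linear features.

        transitions: iterable of (phi, r, phi_next) triples from one Markovian
        trajectory under the target policy (hypothetical interface).
        Returns the Polyak-averaged TD iterate and its bootstrap replicates.
        """
        rng = np.random.default_rng(seed)
        theta = np.zeros(d)                   # main TD iterate
        theta_b = np.zeros((num_boot, d))     # randomly re-weighted replicates
        theta_bar = np.zeros(d)               # running Polyak averages
        theta_b_bar = np.zeros((num_boot, d))

        for t, (phi, r, phi_next) in enumerate(transitions):
            alpha = 0.5 / (t + 1) ** 0.66     # polynomially decaying step size
            # TD(0) semi-gradient update for the main iterate
            delta = r + gamma * phi_next @ theta - phi @ theta
            theta = theta + alpha * delta * phi
            # Replicates apply the same update, scaled by a random weight
            # W with mean 1 and variance 1 (here Exponential(1)).
            w = rng.exponential(1.0, size=num_boot)
            delta_b = r + gamma * theta_b @ phi_next - theta_b @ phi
            theta_b = theta_b + alpha * (w * delta_b)[:, None] * phi
            # Online Polyak averaging of all iterates
            theta_bar += (theta - theta_bar) / (t + 1)
            theta_b_bar += (theta_b - theta_b_bar) / (t + 1)

        return theta_bar, theta_b_bar

Quantiles of the replicate estimates then yield confidence intervals: for the value at a state with feature vector x, np.quantile(theta_b_bar @ x, [0.025, 0.975]) gives an approximate 95% interval around the point estimate x @ theta_bar.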


Full work available at URL: https://arxiv.org/abs/2108.03706



