Statistically Efficient Advantage Learning for Offline Reinforcement Learning in Infinite Horizons
Publication: 6153987
DOI: 10.1080/01621459.2022.2106868
arXiv: 2202.13163
Wikidata: Q114898043 / Scholia: Q114898043
MaRDI QID: Q6153987
FDO: Q6153987
Authors: Chengchun Shi, Shikai Luo, Yuan Le, Hongtu Zhu, Rui Song
Publication date: 19 March 2024
Published in: Journal of the American Statistical Association
Abstract: We consider reinforcement learning (RL) methods in offline domains without additional online data collection, such as mobile health applications. Most existing policy optimization algorithms in the computer science literature are developed in online settings where data are easy to collect or simulate. Their generalizations to mobile health applications with a pre-collected offline dataset remain unknown. The aim of this paper is to develop a novel advantage learning framework that uses pre-collected data efficiently for policy optimization. The proposed method takes an optimal Q-estimator computed by any existing state-of-the-art RL algorithm as input, and outputs a new policy whose value is guaranteed to converge at a faster rate than the policy derived from the initial Q-estimator. Extensive numerical experiments are conducted to back up our theoretical findings. A Python implementation of our proposed method is available at https://github.com/leyuanheart/SEAL.
Full work available at URL: https://arxiv.org/abs/2202.13163
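As a rough illustration of the pipeline the abstract describes (a fitted Q-estimator in, a greedy policy and an advantage function out), here is a minimal Python sketch. It is not taken from the SEAL repository; the function names and the toy Q-table are hypothetical, and the actual method re-estimates the advantage function with a statistically efficient (debiased) procedure rather than the plug-in shown here.

```python
import numpy as np

def greedy_policy_from_q(q_values):
    """Derive the greedy policy from a tabular Q-estimator.

    q_values: array of shape (n_states, n_actions) holding Q-hat(s, a).
    Returns an array of shape (n_states,) with the greedy action per state.
    """
    return np.argmax(q_values, axis=1)

def advantage_from_q(q_values):
    """Plug-in advantage A(s, a) = Q(s, a) - max_a' Q(s, a') implied by the Q-estimator."""
    return q_values - q_values.max(axis=1, keepdims=True)

# Hypothetical 3-state, 2-action Q-estimator (e.g. output of fitted Q-iteration).
q_hat = np.array([[1.0, 0.5],
                  [0.2, 0.9],
                  [0.4, 0.4]])

pi_init = greedy_policy_from_q(q_hat)   # baseline policy derived from the Q-estimator
adv_hat = advantage_from_q(q_hat)       # plug-in advantage; SEAL replaces this step
                                        # with a more efficient advantage estimator
print(pi_init)
print(adv_hat)
```

The sketch only shows the input/output shapes of the framework: any Q-estimator can be plugged in, and the improved policy is again obtained greedily, but from a re-estimated advantage function.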
Keywords: rate of convergence; advantage learning; reinforcement learning; infinite horizons; mobile health applications
Cites Work
- Personalized Policy Learning Using Longitudinal Mobile Health Data
- Title not available
- A simple method for estimating interactions between a treatment and a large number of covariates
- Double/debiased machine learning for treatment and structural parameters
- Doubly Robust Estimation in Missing Data and Causal Inference Models
- Optimal global rates of convergence for nonparametric regression
- Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy
- Performance guarantees for individualized treatment rules
- Basic properties of strong mixing conditions. A survey and some open questions
- High-dimensional A-learning for optimal dynamic treatment regimes
- Q-learning
- Penalized Q-learning for dynamic treatment regimens
- Optimal Dynamic Treatment Regimes
- Inference for non-regular parameters in optimal dynamic treatment regimes
- Optimal Structural Nested Models for Optimal Sequential Decisions
- Robust estimation of optimal dynamic treatment regimes for sequential treatment decisions
- Q- and A-learning methods for estimating optimal dynamic treatment regimes
- New statistical learning methods for estimating optimal dynamic treatment regimes
- Optimal aggregation of classifiers in statistical learning.
- Fast learning rates for plug-in classifiers
- Reinforcement learning. An introduction
- Interpretable dynamic treatment regimes
- Constructing dynamic treatment regimes over indefinite time horizons
- Doubly-robust dynamic treatment regimen estimation via weighted least squares
- Estimating dynamic treatment regimes in mobile health using V-learning
- Mathematical Foundations of Infinite-Dimensional Statistical Models
- Title not available
- Quantile-optimal treatment regimes
- Multi-Armed Angle-Based Direct Learning for Estimating Optimal Individualized Treatment Rules With Various Outcomes
- Title not available
- Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects
- Learning when-to-treat policies
- Greedy outcome weighted tree learning of optimal personalized treatment rules