Estimating Optimal Infinite Horizon Dynamic Treatment Regimes via pT-Learning
From MaRDI portal
Publication:6154019
DOI: 10.1080/01621459.2022.2138760 · arXiv: 2110.10719 · MaRDI QID: Q6154019
Authors: Wenzhuo Zhou, Ruoqing Zhu, Annie Qu
Publication date: 19 March 2024
Published in: Journal of the American Statistical Association
Abstract: Recent advances in mobile health (mHealth) technology provide an effective way to monitor individuals' health statuses and deliver just-in-time personalized interventions. However, the practical use of mHealth technology poses unique challenges to existing methodologies for learning an optimal dynamic treatment regime. Many mHealth applications involve decision-making with large numbers of intervention options in an infinite-horizon setting, where the number of decision stages diverges to infinity. In addition, temporary medication shortages may cause optimal treatments to be unavailable, while it is unclear what alternatives can be used. To address these challenges, we propose a Proximal Temporal consistency Learning (pT-Learning) framework to estimate an optimal regime that is adaptively adjusted between deterministic and stochastic sparse policy models. The resulting minimax estimator avoids the double-sampling issue in existing algorithms. It can be further simplified and easily incorporates off-policy data without mismatched distribution corrections. We study theoretical properties of the sparse policy and establish finite-sample bounds on the excess risk and performance error. The proposed method is provided in our proximalDTR package and is evaluated through extensive simulation studies and the OhioT1DM mHealth dataset.
Full work available at URL: https://arxiv.org/abs/2110.10719
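The abstract describes a policy that adaptively adjusts between deterministic and stochastic sparse behavior. A common way to realize such a policy is a sparsemax-style projection of scaled action values onto the probability simplex, where a temperature parameter controls the degree of sparsity. The sketch below is a generic illustration under that assumption, not the paper's exact pT-Learning formulation; the function names and the use of sparsemax here are the editor's choices for exposition.

```python
import numpy as np

def sparsemax(z):
    # Euclidean projection of z onto the probability simplex
    # (Martins & Astudillo, 2016); yields sparse probabilities.
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum      # actions kept in the support
    k_star = k[support][-1]                  # size of the support
    tau = (cumsum[k_star - 1] - 1.0) / k_star
    return np.maximum(z - tau, 0.0)

def sparse_policy(q_values, temperature):
    # Small temperature -> near-deterministic (mass on the argmax action);
    # large temperature -> stochastic, spreading mass over several actions.
    return sparsemax(np.asarray(q_values, dtype=float) / temperature)
```

For example, `sparse_policy([2.0, 1.0, 0.1], 0.5)` concentrates all probability on the first action, while `sparse_policy([2.0, 1.0, 0.1], 100.0)` assigns positive probability to all three, illustrating the deterministic-to-stochastic adjustment governed by a single tuning parameter.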
Cites Work
- Introduction to empirical processes and semiparametric inference
- Title not available
- Support Vector Machines
- A kernel two-sample test
- High-dimensional \(A\)-learning for optimal dynamic treatment regimes
- \({\mathcal Q}\)-learning
- Optimal Dynamic Treatment Regimes
- Title not available
- A Bernstein type inequality and moderate deviations for weakly dependent sequences
- New statistical learning methods for estimating optimal dynamic treatment regimes
- Convexity, Classification, and Risk Bounds
- Positive definite functions and generalizations, an historical survey
- Exponential inequalities for the distributions of canonical U- and V-statistics of dependent observations
- Reinforcement learning. An introduction
- The Sequential Quadratic Programming Method
- Time bounds for selection
- Constructing dynamic treatment regimes over indefinite time horizons
- Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path
- Breaking the curse of dimensionality with convex neural networks
- Regularized policy iteration with nonparametric function spaces
- Policy evaluation with temporal differences: a survey and comparison
- An emphatic approach to the problem of off-policy temporal-difference learning
- Estimating dynamic treatment regimes in mobile health using V-learning
- Off-policy estimation of long-term average outcomes with applications to mobile health
- Weak convergence properties of constrained emphatic temporal-difference learning with constant and slowly diminishing stepsize