Bandit Theory: Applications to Learning Healthcare Systems and Clinical Trials
From MaRDI portal
Publication: 5072150
DOI: 10.5705/ss.202020.0431
OpenAlex: W3126340951
MaRDI QID: Q5072150
Michael Sklar, Philip W. Lavori, Mei-Chiung Shih
Publication date: 25 April 2022
Published in: Statistica Sinica
Full work available at: https://doi.org/10.5705/ss.202020.0431
Cites Work
- Multi-armed bandit models for the optimal design of clinical trials: benefits and challenges
- Using randomization tests to preserve type I error with response adaptive and covariate adaptive randomization
- Asymptotically efficient adaptive allocation rules
- Kernel-based reinforcement learning
- Pure exploration in finitely-armed and continuous-armed bandits
- Permutation methods: a basis for exact inference
- Sequential methods for comparative effectiveness experiments: point of care clinical trials
- The Knowledge Gradient Algorithm for a General Class of Online Learning Problems
- Thompson Sampling: An Asymptotically Optimal Finite-Time Analysis
- Power, sample size and adaptation considerations in the design of group sequential clinical trials
- Confidence intervals in group sequential trials with random group sizes and applications to survival analysis
- The Markov chain Monte Carlo revolution
- On the inefficiency of the adaptive design for monitoring clinical trials
- Efficient Adaptive Randomization and Stopping Rules in Multi-arm Clinical Trials for Testing a New Treatment
- Adaptive Treatment Assignment in Experiments for Policy Choice
- Online Decision Making with High-Dimensional Covariates
- A linear response bandit problem
- Learning to Optimize via Posterior Sampling
- Optimal adaptive randomized designs for clinical trials
- Finite-time analysis of the multiarmed bandit problem