Online Bootstrap Inference For Policy Evaluation In Reinforcement Learning
From MaRDI portal
Publication: 6185586
DOI: 10.1080/01621459.2022.2096620
arXiv: 2108.03706
OpenAlex: W3191746168
MaRDI QID: Q6185586
Unnamed Author, Zhaoran Wang, Wei Sun, Unnamed Author, Guang Cheng, Zhuoran Yang
Publication date: 8 January 2024
Published in: Journal of the American Statistical Association
Full work available at URL: https://arxiv.org/abs/2108.03706
Keywords: asymptotic normality, stochastic approximation, statistical inference, reinforcement learning, multiplier bootstrap
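The keywords point to the paper's theme: online statistical inference for policy evaluation via a multiplier bootstrap applied to stochastic-approximation (TD-type) iterates. A minimal illustrative sketch of that idea, on a toy Markov reward process with tabular features (the process, step sizes, and multiplier distribution here are assumptions for illustration, not the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-state Markov reward process (illustrative only).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
R = np.array([1.0, 0.0, -1.0])
gamma = 0.9
n_states = 3

def td_multiplier_bootstrap(T=20000, B=50):
    """Online TD(0) plus B multiplier-bootstrap replicas.

    Each replica reweights its TD update by an i.i.d. positive multiplier
    (mean 1, variance 1). Polyak averaging gives the point estimate; the
    spread of the replica averages gives a bootstrap confidence interval.
    """
    theta = np.zeros(n_states)                # main TD iterate
    theta_b = np.zeros((B, n_states))         # bootstrap replicas
    avg = np.zeros(n_states)                  # Polyak average of theta
    avg_b = np.zeros((B, n_states))           # Polyak averages of replicas
    s = 0
    for t in range(T):
        s_next = rng.choice(n_states, p=P[s])
        a = 1.0 / (t + 1) ** 0.67             # polynomially decaying step size
        # Main TD(0) update.
        theta[s] += a * (R[s] + gamma * theta[s_next] - theta[s])
        # Replica updates scaled by Exponential(1) multipliers (mean 1, var 1).
        w = rng.exponential(1.0, size=B)
        delta_b = R[s] + gamma * theta_b[:, s_next] - theta_b[:, s]
        theta_b[:, s] += a * w * delta_b
        # Online Polyak averaging.
        avg += (theta - avg) / (t + 1)
        avg_b += (theta_b - avg_b) / (t + 1)
        s = s_next
    return avg, avg_b

est, boot = td_multiplier_bootstrap()
lo, hi = np.percentile(boot[:, 0], [2.5, 97.5])  # 95% percentile CI for V(0)
print(f"V(0) estimate: {est[0]:.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Because the multipliers are drawn online at each step, all replicas can be updated from a single pass over the data stream, which is the appeal of this style of inference in the online setting.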
Cites Work
- Trajectory averaging for stochastic approximation MCMC algorithms
- Challenges of real-world reinforcement learning: definitions, benchmarks and analysis
- Statistical inference for model parameters in stochastic gradient descent
- Moment Consistency of the Exchangeably Weighted Bootstrap for Semiparametric M-estimation
- Markov Chains and Stochastic Stability
- Acceleration of Stochastic Approximation by Averaging
- An analysis of temporal-difference learning with function approximation
- Constructing dynamic treatment regimes over indefinite time horizons
- Markov Chains
- DOI: 10.1162/1532443041827907
- Statistical Inference for Online Decision Making via Stochastic Gradient Descent
- Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning
- Estimating Dynamic Treatment Regimes in Mobile Health Using V-Learning
- Inference and uncertainty quantification for noisy matrix completion
- Stability of Stochastic Approximation under Verifiable Conditions
- A Stochastic Approximation Method
- The bootstrap and Edgeworth expansion
- Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework