A multiagent reinforcement learning framework for off-policy evaluation in two-sided markets
Publication: Q6138596
DOI: 10.1214/22-AOAS1700
MaRDI QID: Q6138596
Authors: Chengchun Shi, Runzhe Wan, Ge Song, Shikai Luo, Hongtu Zhu, Rui Song
Publication date: 16 January 2024
Published in: The Annals of Applied Statistics
Abstract: Two-sided markets, such as those operated by ride-sharing companies, often involve a group of subjects making sequential decisions across time and/or location. With the rapid development of smartphones and the internet of things, these platforms have substantially transformed the transportation landscape. In this paper we consider large-scale fleet management in ride-sharing companies, where multiple units in different areas receive sequences of products (or treatments) over time. Major technical challenges, such as policy evaluation, arise in these studies because (i) spatial and temporal proximities induce interference between locations and times, and (ii) the large number of locations results in the curse of dimensionality. To address both challenges simultaneously, we introduce a multi-agent reinforcement learning (MARL) framework for carrying out policy evaluation in these studies. We propose novel estimators for mean outcomes under different products that are consistent despite the high dimensionality of the state-action space. The proposed estimator performs favorably in simulation experiments. We further illustrate our method using a real dataset obtained from a two-sided marketplace company to evaluate the effects of applying different subsidizing policies. A Python implementation of our proposed method is available at https://github.com/RunzheStat/CausalMARL.
Full work available at URL: https://arxiv.org/abs/2202.10574
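The abstract contrasts standard off-policy evaluation with the spatial-interference setting the paper addresses. The sketch below is not the paper's estimator (see the CausalMARL repository for that); it is a minimal, self-contained illustration of per-region inverse-propensity-weighted policy evaluation on toy logged data, where a hypothetical neighbour spill-over term stands in for the spatial interference the abstract describes. All names and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logged data: R regions observed over T time steps. Each region logs a
# binary treatment (e.g. subsidy on/off) drawn from a behaviour policy, plus
# a reward. These settings are illustrative, not from the paper.
R, T = 5, 200
behaviour_p = 0.5                      # behaviour policy: treat w.p. 0.5
target_p = 0.8                         # target policy: treat w.p. 0.8
actions = rng.binomial(1, behaviour_p, size=(R, T))

# Reward depends on a region's own action plus a spill-over from its two
# neighbouring regions (a crude stand-in for spatial interference).
neighbour_mean = (np.roll(actions, 1, axis=0) + np.roll(actions, -1, axis=0)) / 2
rewards = 1.0 * actions + 0.5 * neighbour_mean + rng.normal(0.0, 0.1, size=(R, T))

def ipw_value(actions, rewards, b_p, t_p):
    """Per-region IPW estimate of the mean outcome under the target policy,
    reweighting only each region's own action (interference is ignored)."""
    w = np.where(actions == 1, t_p / b_p, (1 - t_p) / (1 - b_p))
    return (w * rewards).mean(axis=1)

print(ipw_value(actions, rewards, behaviour_p, target_p))
```

Note that this naive per-region IPW reweights only each region's own action, so the spill-over term is left at its behaviour-policy level; under this toy data-generating process the estimator is biased for the value of deploying the target policy everywhere. That gap is exactly the interference problem motivating the MARL framework.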
Cites Work
- A robust method for estimating optimal treatment regimes
- Basic properties of strong mixing conditions. A survey and some open questions
- Batch policy learning in average reward Markov decision processes
- Bayesian method for causal inference in spatially-correlated multivariate time series
- Causal inference for statistics, social, and biomedical sciences. An introduction
- Doubly robust policy evaluation and optimization
- Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score
- Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning
- Estimating dynamic treatment regimes in mobile health using V-learning
- Estimating individualized treatment rules using outcome weighted learning
- Estimation and Inference of Heterogeneous Treatment Effects using Random Forests
- Evaluating marker-guided treatment selection strategies
- Exact \(p\)-values for network interference
- Greedy outcome weighted tree learning of optimal personalized treatment rules
- High-dimensional \(A\)-learning for optimal dynamic treatment regimes
- Inference for non-regular parameters in optimal dynamic treatment regimes
- Interpretable dynamic treatment regimes
- Learning optimal distributionally robust individualized treatment rules
- Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects
- Multi-agent reinforcement learning: a selective overview of theories and algorithms
- New statistical learning methods for estimating optimal dynamic treatment regimes
- Off-policy estimation of long-term average outcomes with applications to mobile health
- Optimal Dynamic Treatment Regimes
- Optimal Structural Nested Models for Optimal Sequential Decisions
- Penalized Q-learning for dynamic treatment regimens
- Performance guarantees for individualized treatment rules
- Personalized Policy Learning Using Longitudinal Mobile Health Data
- Program evaluation and causal inference with high-dimensional data
- Quantile-optimal treatment regimes
- Regularized policy iteration with nonparametric function spaces
- Reinforcement learning. An introduction
- Resampling‐based confidence intervals for model‐free robust inference on optimal treatment regimes
- Robust estimation of optimal dynamic treatment regimes for sequential treatment decisions
- Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy
- The stratified micro-randomized trial design: sample size considerations for testing nested causal effects of time-varying treatments
- Time series experiments and causal estimands: exact randomization tests and trading
- Toward Causal Inference With Interference
- Using decision lists to construct interpretable and parsimonious treatment regimes