Myopic Quantal Response Policy: Thompson Sampling Meets Behavioral Economics

From MaRDI portal
Publication: 6403850
MaRDI QID: Q6403850
arXiv: 2207.01028


Authors: Jingying Ding, Yifan Feng, Ying Rong


Publication date: 3 July 2022

Abstract: We study a novel family of behavioral policies for the multi-armed bandit (MAB) problem, which we term Myopic Quantal Response (MQR). MQR prescribes a simple way to randomize over arms according to historical rewards and a "coefficient of exploitation," which explicitly manages the exploration-exploitation trade-off. MQR is a dynamic adaptation of quantal response models in which the anticipated utilities are derived directly from past rewards; it can also be viewed as a generalization of the Thompson Sampling (TS) algorithm. We develop an asymptotic theory for MQR and show how it helps characterize not only asymptotically optimal policies such as TS, but also policies that are suboptimal because they "under-" or "over-"explore. In the non-asymptotic setting, we demonstrate how MQR can be used as a structural estimation tool: given observed data (i.e., realized actions and rewards), we can estimate the implied coefficient of exploitation of any given policy, whether generated by humans or by algorithms. This lets us diagnose whether, and to what extent, the policy underexplores or overexplores.
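The paper's exact specification of MQR is not reproduced on this page. As an illustrative sketch only, the description above — a quantal-response choice rule over reward estimates, tuned by a coefficient of exploitation — can be rendered as a standard logit (softmax) bandit policy. The function names `mqr_choose` and `run_bandit`, the parameter `lam`, and the uniform pseudo-count prior are all assumptions for this sketch, not the authors' implementation.

```python
import math
import random

def mqr_choose(mean_rewards, lam, rng=random):
    """Pick an arm via a logit quantal response over estimated rewards.

    `lam` plays the role of a "coefficient of exploitation" (hypothetical
    naming): lam = 0 gives uniform random play (pure exploration), while
    large lam approaches greedy play (pure exploitation).
    """
    weights = [math.exp(lam * m) for m in mean_rewards]
    total = sum(weights)
    r = rng.random() * total
    for arm, w in enumerate(weights):
        r -= w
        if r <= 0:
            return arm
    return len(weights) - 1  # guard against floating-point round-off

def run_bandit(true_means, lam, horizon, seed=0):
    """Simulate a Bernoulli bandit under the quantal-response policy,
    with anticipated utilities taken as empirical mean rewards
    (the "myopic" part of the sketch)."""
    rng = random.Random(seed)
    counts = [1] * len(true_means)   # pseudo-count prior (assumed)
    sums = [0.5] * len(true_means)   # neutral prior mean of 0.5 (assumed)
    total_reward = 0
    for _ in range(horizon):
        means = [s / c for s, c in zip(sums, counts)]
        arm = mqr_choose(means, lam, rng)
        reward = 1 if rng.random() < true_means[arm] else 0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward
```

Varying `lam` in this sketch mimics the under-/over-exploration spectrum discussed in the abstract: small values keep randomizing across arms long after their empirical means separate, while large values lock onto the current best-looking arm.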

This page was built for publication: Myopic Quantal Response Policy: Thompson Sampling Meets Behavioral Economics
