Delay-Adaptive Learning in Generalized Linear Contextual Bandits

From MaRDI portal
Publication:6189908

DOI: 10.1287/MOOR.2023.1358
arXiv: 2003.05174
OpenAlex: W3010924289
MaRDI QID: Q6189908


Authors: Jose Blanchet, Renyuan Xu, Zhengyuan Zhou


Publication date: 5 March 2024

Published in: Mathematics of Operations Research

Abstract: In this paper, we consider online learning in generalized linear contextual bandits where rewards are not immediately observed. Instead, rewards become available to the decision-maker only after some delay, which is unknown and stochastic. We study the performance of two well-known algorithms adapted to this delayed setting: one based on upper confidence bounds, and the other based on Thompson sampling. We describe how each algorithm should be modified to handle delays and give regret characterizations for both. Our results contribute to the broad landscape of the contextual bandits literature by establishing that both algorithms can be made robust to delays, thereby helping clarify and reaffirm the empirical success of these two algorithms, which are widely deployed in modern recommendation engines.


Full work available at URL: https://arxiv.org/abs/2003.05174
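The delay-handling idea described in the abstract can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy, not the paper's exact construction: it uses a linear reward model (the identity-link special case of a generalized linear model), a UCB-style index, and a uniform integer delay model, and it updates the regression statistics only once a delayed reward actually arrives. All parameter names (`alpha`, `max_delay`, etc.) are illustrative.

```python
import numpy as np

def delayed_linucb(T=500, d=5, K=4, alpha=1.0, max_delay=10, seed=0):
    """Toy delay-adaptive UCB for a linear contextual bandit.

    Illustrative sketch only: the delay distribution, bonus form, and
    parameters are assumptions, not the paper's exact algorithm.
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.normal(size=d) / np.sqrt(d)  # unknown true parameter
    A = np.eye(d)   # regularized Gram matrix built from *observed* rounds only
    b = np.zeros(d) # reward-weighted context sum from observed rounds only
    pending = []    # (arrival_round, context, reward) awaiting their delay
    total_reward = 0.0
    for t in range(T):
        # Fold in rewards whose stochastic delay has now elapsed.
        arrived = [p for p in pending if p[0] <= t]
        pending = [p for p in pending if p[0] > t]
        for _, x, r in arrived:
            A += np.outer(x, x)
            b += r * x
        theta_hat = np.linalg.solve(A, b)
        A_inv = np.linalg.inv(A)
        contexts = rng.normal(size=(K, d)) / np.sqrt(d)
        # Optimistic index: estimated reward plus an exploration bonus.
        bonus = np.sqrt(np.einsum('kd,de,ke->k', contexts, A_inv, contexts))
        x = contexts[int(np.argmax(contexts @ theta_hat + alpha * bonus))]
        r = float(x @ theta_star + 0.1 * rng.normal())
        total_reward += r
        # The reward is queued and only becomes visible after a random delay.
        delay = int(rng.integers(0, max_delay + 1))
        pending.append((t + 1 + delay, x, r))
    return total_reward, theta_hat, theta_star
```

Despite updating on stale information, the estimator still converges as delayed rewards trickle in, which mirrors the paper's qualitative message that UCB-type algorithms can be made robust to delayed feedback.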




Cited In (1)





