Delay-Adaptive Learning in Generalized Linear Contextual Bandits


Publication: 6189908

DOI: 10.1287/MOOR.2023.1358
arXiv: 2003.05174
OpenAlex: W3010924289
MaRDI QID: Q6189908

Renyuan Xu, Zhengyuan Zhou, Jose H. Blanchet

Publication date: 5 March 2024

Published in: Mathematics of Operations Research

Abstract: In this paper, we consider online learning in generalized linear contextual bandits where rewards are not immediately observed. Instead, rewards become available to the decision-maker only after an unknown, stochastic delay. We study the performance of two well-known algorithms adapted to this delayed setting: one based on upper confidence bounds and the other on Thompson sampling. We describe how each algorithm should be modified to handle delays and give regret characterizations for both. Our results contribute to the broad landscape of the contextual bandits literature by establishing that both algorithms can be made robust to delays, thereby helping clarify and reaffirm the empirical success of these two algorithms, which are widely deployed in modern recommendation engines.
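The delayed-feedback mechanism the abstract describes lends itself to a short illustration. Below is a minimal Python sketch of a UCB-style generalized linear bandit loop in which each reward is revealed only after a random delay, and the estimator is refreshed from whatever feedback has arrived so far. This is not the paper's algorithm: the logistic link, the geometric delay distribution, the exploration width `alpha`, and the least-squares estimate standing in for the GLM maximum-likelihood step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 5, 10, 2000                          # feature dim, arms per round, horizon
theta_star = rng.normal(size=d) / np.sqrt(d)   # hypothetical true parameter


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Ridge-regularized design matrix and response vector, updated only when a
# reward actually arrives: the learner acts on current information but
# learns from whatever feedback has been revealed so far.
V = np.eye(d)
b = np.zeros(d)
pending = []      # (arrival_round, context, reward) triples not yet observed
alpha = 1.0       # exploration width (a tuning constant in this sketch)

for t in range(T):
    # 1) Incorporate all rewards whose delay has elapsed by round t.
    arrived = [p for p in pending if p[0] <= t]
    pending = [p for p in pending if p[0] > t]
    for _, x, r in arrived:
        V += np.outer(x, x)
        b += r * x

    # 2) Point estimate via a single least-squares solve (a crude stand-in
    #    for the GLM maximum-likelihood step).
    theta_hat = np.linalg.solve(V, b)
    V_inv = np.linalg.inv(V)

    # 3) Optimistic (UCB-style) arm selection over fresh random contexts:
    #    estimated mean plus a width proportional to ||x||_{V^{-1}}.
    X = rng.normal(size=(K, d)) / np.sqrt(d)
    ucb = X @ theta_hat + alpha * np.sqrt(np.einsum('ki,ij,kj->k', X, V_inv, X))
    x = X[np.argmax(ucb)]

    # 4) Draw a Bernoulli reward under a logistic link; it is queued and
    #    becomes visible only after a random (here geometric) delay.
    r = float(rng.random() < sigmoid(x @ theta_star))
    delay = rng.geometric(0.1)   # unknown, stochastic delay
    pending.append((t + delay, x, r))
```

The `pending` queue is the only structural change relative to the standard fully-observed loop; arm selection is the usual optimistic rule, applied to whichever rewards have arrived by the current round.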


Full work available at URL: https://arxiv.org/abs/2003.05174
