No-regret learning for repeated non-cooperative games with lossy bandits

Publication: 6152576

DOI: 10.1016/j.automatica.2023.111455
arXiv: 2205.06968
MaRDI QID: Q6152576


Authors: Wenting Liu, Jinlong Lei, Peng Yi, Yiguang Hong


Publication date: 13 February 2024

Published in: Automatica

Abstract: This paper considers no-regret learning for repeated continuous-kernel games with lossy bandit feedback. Since explicit models of the utility functions are difficult to obtain in dynamic environments, the players can learn only through bandit feedback. Moreover, because of unreliable communication channels or privacy protection, the bandit feedback may be lost or dropped at random. We therefore study an asynchronous online learning strategy with which the players adaptively adjust their next actions to minimize the long-term regret. The paper proposes a novel no-regret learning algorithm, called Online Gradient Descent with lossy bandits (OGD-lb). We first give a regret analysis for concave games with differentiable and Lipschitz utilities. We then show that the action profile converges to a Nash equilibrium with probability 1 when the game is also strictly monotone, and we further provide the mean-square convergence rate when the game is strongly monotone. In addition, we extend the algorithm to the case where the loss probability of the bandit feedback is unknown, and prove its almost sure convergence to a Nash equilibrium for strictly monotone games. Finally, we take resource management in fog computing as an application example and carry out numerical experiments to demonstrate the algorithm's performance empirically.


Full work available at URL: https://arxiv.org/abs/2205.06968
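The abstract describes the OGD-lb procedure only at a high level. The following Python snippet is a minimal illustrative sketch of online gradient ascent with lossy one-point bandit feedback in that spirit; the step-size schedule, perturbation radius, box constraint set, loss probability, and the placeholder utility function are assumptions made here for illustration and are not taken from the paper.

```python
import numpy as np

# Sketch of one player's updates under lossy one-point bandit feedback,
# loosely following the OGD-lb idea from the abstract. All numeric choices
# below (step sizes, perturbation radius, constraint box, utility) are
# illustrative assumptions, not the paper's exact algorithm.

rng = np.random.default_rng(0)

def project(x, lo=-1.0, hi=1.0):
    """Project the action back onto an assumed box constraint set."""
    return np.clip(x, lo, hi)

def utility(x_i, x_others):
    """Hypothetical concave utility for the player (placeholder)."""
    return -np.sum((x_i - 0.5 * x_others) ** 2)

def ogd_lossy_bandit(x_others, dim=2, T=5000, loss_prob=0.3):
    """Run T rounds; the bandit payoff is dropped with probability loss_prob."""
    x = np.zeros(dim)
    for t in range(1, T + 1):
        eta = 1.0 / t             # diminishing step size (assumed schedule)
        delta = 1.0 / t ** 0.25   # shrinking perturbation radius (assumed)
        u = rng.normal(size=dim)
        u /= np.linalg.norm(u)           # random unit direction
        x_play = project(x + delta * u)  # perturbed action actually played
        if rng.random() > loss_prob:     # payoff observed only if not lost
            payoff = utility(x_play, x_others)
            grad_est = (dim / delta) * payoff * u  # one-point gradient estimate
            x = project(x + eta * grad_est)        # ascent step on own utility
        # if the feedback packet is lost, the action is left unchanged
    return x

print(ogd_lossy_bandit(x_others=np.array([0.2, -0.1])))
```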



