A Bandit Learning Method for Continuous Games under Feedback Delays with Residual Pseudo-Gradient Estimate
From MaRDI portal
arXiv: 2303.16433 · MaRDI QID: Q6431228 · FDO: Q6431228
Authors: Yuanhanqing Huang, Jianghai Hu
Publication date: 28 March 2023
Abstract: Learning in multi-player games can model a large variety of practical scenarios, where each player seeks to optimize its own local objective function, which in turn depends on the actions taken by the other players. Motivated by the frequent absence of first-order information such as partial gradients when solving local optimization problems, and by the prevalence of asynchronicity and feedback delays in multi-agent systems, we introduce a bandit learning algorithm that integrates mirror descent, residual pseudo-gradient estimates, and a priority-based feedback utilization strategy to contend with these challenges. We establish that for pseudo-monotone plus games, the actual sequences of play generated by the proposed algorithm converge almost surely to critical points. Compared with an existing method, the proposed algorithm yields more consistent estimates with less variation and allows for more aggressive choices of parameters. Finally, we illustrate the validity of the proposed algorithm through a thermal load management problem for building complexes.
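To make the two building blocks named in the abstract concrete, the following is a minimal illustrative sketch (not the authors' implementation): a residual-style one-point pseudo-gradient estimate, which rescales the difference of two successive bandit payoffs rather than a single payoff, combined with an entropic mirror descent step on the probability simplex. The function names, the choice of mirror map, and the toy cost are assumptions for illustration only; the paper's precise estimator, mirror map, and delay-handling strategy may differ.

```python
import numpy as np

def residual_estimate(payoff_now, payoff_prev, perturb, delta, dim):
    # Residual-style one-point estimate (illustrative): scaling the
    # *difference* of successive payoffs instead of a single payoff
    # typically reduces the variance of the gradient estimate.
    return (dim / delta) * (payoff_now - payoff_prev) * perturb

def mirror_descent_step(x, grad_est, step):
    # Entropic mirror descent on the probability simplex
    # (one common mirror map; chosen here only for illustration).
    y = x * np.exp(-step * grad_est)
    return y / y.sum()

# Toy single-player usage on a quadratic cost over the simplex.
rng = np.random.default_rng(0)
dim, delta, step = 3, 0.1, 0.05
cost = lambda x: float(x @ x)          # assumed toy objective
x = np.full(dim, 1.0 / dim)
payoff_prev, perturb_prev = cost(x), np.zeros(dim)
for _ in range(200):
    perturb = rng.standard_normal(dim)
    perturb /= np.linalg.norm(perturb)  # unit-sphere perturbation
    payoff = cost(x + delta * perturb)  # bandit (zeroth-order) feedback
    g = residual_estimate(payoff, payoff_prev, perturb, delta, dim)
    x = mirror_descent_step(x, g, step)
    payoff_prev = payoff
```

After the loop, x remains a point on the simplex; in a multi-player, delayed-feedback setting each player would run such an update using only its own (possibly stale) payoff observations.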