Penalty-regulated dynamics and robust learning procedures in games

From MaRDI portal
Publication:3449451

DOI: 10.1287/MOOR.2014.0687
zbMATH Open: 1377.91033
arXiv: 1303.2270
OpenAlex: W2116718598
Wikidata: Q60142071 (Scholia: Q60142071)
MaRDI QID: Q3449451 (FDO: Q3449451)


Authors: Pierre Coucheney, Bruno Gaujal, Panayotis Mertikopoulos


Publication date: 4 November 2015

Published in: Mathematics of Operations Research

Abstract: Starting from a heuristic learning scheme for N-person games, we derive a new class of continuous-time learning dynamics consisting of a replicator-like drift adjusted by a penalty term that renders the boundary of the game's strategy space repelling. These penalty-regulated dynamics are equivalent to players keeping an exponentially discounted aggregate of their ongoing payoffs and then using a smooth best response to pick an action based on these performance scores. Owing to this inherent duality, the proposed dynamics satisfy a variant of the folk theorem of evolutionary game theory and they converge to (arbitrarily precise) approximations of Nash equilibria in potential games. Motivated by applications to traffic engineering, we exploit this duality further to design a discrete-time, payoff-based learning algorithm which retains these convergence properties and only requires players to observe their in-game payoffs; moreover, the algorithm remains robust in the presence of stochastic perturbations and observation errors, and it does not require any synchronization between players.
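The score-based scheme described in the abstract — each player tracks an exponentially discounted aggregate of observed payoffs and picks actions through a smooth best response — can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: the 2x2 coordination game, the step size `discount`, the logit temperature `eta`, and the importance-weighted payoff estimate (used so that unplayed actions need no payoff observation) are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 coordination (potential) game: both players prefer to match.
# payoffs[i][a, b] = payoff to player i when player 1 plays a and player 2 plays b.
payoffs = [np.array([[1.0, 0.0], [0.0, 1.0]]),
           np.array([[1.0, 0.0], [0.0, 1.0]])]

def logit(scores, eta):
    """Smooth (logit) best response: a softmax of the performance scores."""
    z = scores / eta
    z = z - z.max()          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def run(T=5000, eta=0.1, discount=0.01):
    scores = [np.zeros(2), np.zeros(2)]
    x = [logit(s, eta) for s in scores]   # mixed strategies
    for _ in range(T):
        # Each player samples an action from its current mixed strategy ...
        acts = [rng.choice(2, p=x[i]) for i in range(2)]
        for i in range(2):
            # ... observes only its own realized in-game payoff ...
            u = payoffs[i][acts[0], acts[1]]
            # ... and updates an exponentially discounted score for the action
            # it played, importance-weighted by the probability of playing it.
            est = np.zeros(2)
            est[acts[i]] = u / x[i][acts[i]]
            scores[i] = scores[i] + discount * (est - scores[i])
        x = [logit(s, eta) for s in scores]
    return x

x = run()
```

Under these assumptions the two players lock onto one of the two pure coordination equilibria: a mismatched profile yields zero payoff, so its scores decay, while the logit map amplifies any score advantage of the matched action.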


Full work available at URL: https://arxiv.org/abs/1303.2270










Cited In (20)





This page was built for publication: Penalty-regulated dynamics and robust learning procedures in games
