Convergence Rate of Learning a Strongly Variationally Stable Equilibrium

From MaRDI portal
Publication: Q6509526

arXiv: 2304.02355 · MaRDI QID: Q6509526 · FDO: Q6509526


Authors: Tatiana Tatarenko, Maryam Kamgarpour



Abstract: We derive the rate of convergence to a strongly variationally stable Nash equilibrium in a convex game for a zeroth-order learning algorithm. We consider both one-point and two-point feedback under the standard convexity assumption on the game. Although we do not assume strong monotonicity of the game, our rates, O(Nd/√t) for one-point feedback and O((Nd)²/t) for two-point feedback, match the best rates known for strongly monotone games.
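The abstract contrasts one-point and two-point zeroth-order feedback. The sketch below is not the authors' algorithm; it illustrates the standard sphere-smoothing estimators that such methods are built on (the two-point version queries the cost twice per step, which reduces variance and underlies the faster O((Nd)²/t) rate), applied to a hypothetical single-player quadratic cost for illustration:

```python
import numpy as np

def random_unit_vector(d, rng):
    """Sample u uniformly from the unit sphere in R^d."""
    u = rng.standard_normal(d)
    return u / np.linalg.norm(u)

def one_point_estimate(f, x, delta, rng):
    """One-point zeroth-order estimate: (d/delta) * f(x + delta*u) * u.
    Unbiased for the gradient of a smoothed version of f, but with
    variance that does not vanish near a solution."""
    d = x.size
    u = random_unit_vector(d, rng)
    return (d / delta) * f(x + delta * u) * u

def two_point_estimate(f, x, delta, rng):
    """Two-point estimate: (d / (2*delta)) * (f(x+delta*u) - f(x-delta*u)) * u.
    Two queries per step; the difference cancels the zeroth-order term,
    giving much lower variance than the one-point estimate."""
    d = x.size
    u = random_unit_vector(d, rng)
    return (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Illustrative (hypothetical) single-player example: minimize f(x) = ||x||^2
# using only function evaluations and a diminishing step size.
rng = np.random.default_rng(0)
f = lambda x: float(x @ x)
x = np.ones(3)
for t in range(1, 2001):
    g = two_point_estimate(f, x, delta=0.01, rng=rng)
    x = x - (0.5 / t) * g
```

For the quadratic cost above, the two-point estimate is exactly 2d(x·u)u, whose expectation over uniform u is the true gradient 2x, so the iterate drifts toward the equilibrium x = 0.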













