Continuous-time convergence rates in potential and monotone games
From MaRDI portal
Publication:5081641
Abstract: In this paper, we provide exponential rates of convergence to the interior Nash equilibrium for continuous-time dual-space game dynamics such as mirror descent (MD) and actor-critic (AC). We perform our analysis in \(N\)-player continuous concave games that satisfy certain monotonicity assumptions while possibly also admitting potential functions. In the first part of this paper, we provide a novel relative characterization of monotone games and show that MD and its discounted version converge at exponential rates in relatively strongly monotone and relatively hypo-monotone games, respectively. In the second part of this paper, we specialize our results to games that admit a relatively strongly concave potential and show that AC converges at an exponential rate. These rates extend the known convergence conditions for these dynamics. Simulations are performed which empirically back up our results.
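The dual-space mirror descent dynamics discussed in the abstract can be illustrated with a minimal simulation. This is a sketch under simplifying assumptions, not the paper's setup: it uses a hypothetical two-player quadratic game whose pseudogradient \(F(x) = Ax + b\) is strongly monotone (A positive definite), the Euclidean mirror map (so the dual-to-primal step is the identity), and crude forward-Euler integration of the dynamics \(\dot z = -F(x)\), \(x = \nabla\psi^*(z)\).

```python
import numpy as np

# Hypothetical quadratic game: pseudogradient F(x) = A x + b with A
# positive definite, hence strongly monotone; the unique interior Nash
# equilibrium x* solves F(x*) = 0.
A = np.array([[2.0, 0.5],
              [0.5, 2.0]])
b = np.array([1.0, -1.0])
x_star = -np.linalg.solve(A, b)

def F(x):
    """Pseudogradient (stacked negated payoff gradients) of the game."""
    return A @ x + b

def mirror_descent(z0, dt=1e-3, T=10.0):
    """Forward-Euler integration of continuous-time mirror descent.

    With the Euclidean mirror map psi = 0.5*||.||^2, the primal
    variable is x = grad psi*(z) = z, and the dual dynamics reduce to
    z' = -F(x).
    """
    z = z0.copy()
    for _ in range(int(T / dt)):
        x = z                # x = grad psi*(z); identity for Euclidean psi
        z = z - dt * F(x)    # dual-space dynamics z' = -F(x)
    return z

x_final = mirror_descent(np.array([3.0, -2.0]))
print(np.linalg.norm(x_final - x_star))  # distance to Nash equilibrium
```

Because the strong-monotonicity constant here is \(\lambda_{\min}(A) = 1.5\), the trajectory contracts toward the equilibrium exponentially fast, consistent with the rates described in the abstract.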
Recommendations
- On the rate of convergence of continuous-time fictitious play
- On best-response dynamics in potential games
- Generalized mirror descents with non-convex potential functions in atomic congestion games: continuous time and discrete time
- Continuous time learning algorithms in optimization and game theory
- The rate of convergence of continuous fictitious play
Cites work
- scientific article; zbMATH DE number 5869530
- scientific article; zbMATH DE number 3653840
- scientific article; zbMATH DE number 1243371
- scientific article; zbMATH DE number 903638
- A Passivity-Based Approach to Nash Equilibrium Seeking Over Networks
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Continuous-Time Discounted Mirror Descent Dynamics in Monotone Concave Games
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- Convergent multiple-timescales reinforcement learning algorithms in normal form games
- Cycles in adversarial regularized learning
- Distributed Nash Equilibrium Seeking by a Consensus Based Approach
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Dynamic fictitious play, dynamic gradient play, and distributed convergence to Nash equilibria
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- First-order methods in optimization
- Hedging under uncertainty: regret minimization meets exponentially fast convergence
- Higher order game dynamics
- Information geometry and its applications
- Learning in games via reinforcement and regularization
- Machine learning. A probabilistic perspective
- Mixed-Strategy Learning With Continuous Action Sets
- Nash Equilibrium Seeking in Noncooperative Games
- On Passivity, Reinforcement Learning, and Higher Order Learning in Multiagent Finite Games
- On best-response dynamics in potential games
- Optimization methods for large-scale machine learning
- Penalty-regulated dynamics and robust learning procedures in games
- Relatively smooth convex optimization by first-order methods, and applications
- Saddle-point dynamics: conditions for asymptotic stability of saddle points
- The approximate duality gap technique: a unified theory of first-order methods
Cited in
(2 documents)