MF-OMO: An Optimization Formulation of Mean-Field Games
From MaRDI portal
Publication:6188322
Abstract: The theory of mean-field games (MFGs) has recently experienced exponential growth. Existing analytical approaches for finding Nash equilibrium (NE) solutions of MFGs are, however, by and large restricted to contractive or monotone settings, or rely on uniqueness of the NE. This paper proposes a new mathematical paradigm for analyzing discrete-time MFGs without any of these restrictions. The key idea is to reformulate the problem of finding NE solutions of MFGs as an equivalent optimization problem, called MF-OMO, with bounded variables and simple convex constraints. The reformulation builds on the classical linear-programming formulation of Markov decision processes, adds a consistency constraint for MFGs in terms of occupation measures, and exploits the complementarity structure of the linear program. This equivalence framework enables finding multiple (and possibly all) NE solutions of MFGs by standard algorithms such as projected gradient descent, with convergence guarantees under appropriate conditions. In particular, analyzing MFGs with linear rewards and mean-field-independent dynamics reduces to solving a finite number of linear programs, and is hence solvable in finite time. The optimization reformulation extends to variants of MFGs such as personalized MFGs.
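The projected gradient descent mentioned in the abstract requires projecting iterates onto probability-simplex-type constraint sets (compare the cited work on fast projection onto the simplex and the \(l_1\) ball). A minimal sketch of that building block, applied to a toy quadratic objective rather than the actual MF-OMO objective (all function names and the objective here are illustrative, not taken from the paper):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1}, via the standard sort-based O(n log n) algorithm."""
    n = v.size
    u = np.sort(v)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)
    # largest index rho with u_rho + (1 - css_rho)/rho > 0 (1-based rho)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def projected_gradient_descent(grad, x0, step=0.1, iters=500):
    """Minimize a smooth objective over the simplex by projected gradient steps."""
    x = project_simplex(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project_simplex(x - step * grad(x))
    return x

# Toy objective f(x) = 0.5 * ||x - c||^2 with c outside the simplex;
# the constrained minimizer is exactly the projection of c onto the simplex.
c = np.array([0.8, 0.6, -0.2])
x_star = projected_gradient_descent(lambda x: x - c, np.ones(3) / 3)
# x_star ≈ [0.6, 0.4, 0.0]
```

In MF-OMO the variables are occupation measures and related quantities with box and simplex-like constraints, so the same project-then-step pattern applies with the problem's own objective and constraint sets in place of this toy example.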
Cites work
- scientific article; zbMATH DE number 3912096 (no title available)
- A Probabilistic Approach to Extended Finite State Mean Field Games
- Control and optimal stopping mean field games: a linear programming approach
- Existence of Markov Controls and Characterization of Optimal Markov Controls
- Fast projection onto the simplex and the \(l_1\) ball
- From the master equation to mean field game limit theory: a central limit theorem
- Generalized polynomial approximations in Markovian decision processes
- Infinite-Dimensional Linear Programming Approach to Singular Stochastic Control
- Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle
- Linear Programming and Markov Decision Chains
- Linear Programming in a Markov Chain
- Linear programming algorithms for semi-Markovian decision processes
- Linear programming and sequential decisions
- Linear programming fictitious play algorithm for mean field games with optimal stopping and absorption
- Markov-Nash equilibria in mean-field games with discounted cost
- Mean field forward-backward stochastic differential equations
- Mean field games
- Mean-field backward stochastic differential equations: A limit approach
- Mean-field games of optimal stopping: a relaxed solution approach
- Occupation measures for controlled Markov processes: Characterization and optimality
- On Linear Programming in a Markov Decision Problem
- On sequential decisions and Markov chains
- Probabilistic theory of mean field games with applications II. Mean field games with common noise and master equations
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Q-learning in regularized mean-field games
- Stationary solutions and forward equations for controlled and singular martingale problems
- The LP approach in average reward MDPs with multiple cost constraints: The countable state case
- The Linear Programming Approach to Approximate Dynamic Programming
- The Master Equation and the Convergence Problem in Mean Field Games
- The master equation in mean field theory
- Time-average control of martingale problems: A linear programming formulation
- Unified reinforcement Q-learning for mean field game and control problems