Exploratory LQG mean field games with entropy regularization
From MaRDI portal
Publication:2116646
Abstract: We study a general class of entropy-regularized multivariate LQG mean field games (MFGs) in continuous time with distinct sub-populations of agents. We extend the notion of actions to action distributions (exploratory actions), and explicitly derive the optimal action distributions for individual agents in the limiting MFG. We demonstrate that the optimal set of action distributions yields an \(\epsilon\)-Nash equilibrium for the finite-population entropy-regularized MFG. Furthermore, we compare the resulting solutions with those of classical LQG MFGs and establish the equivalence of their existence.
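A brief illustration of the mechanism behind exploratory actions (a hedged sketch, not the paper's exact model): in entropy-regularized LQ control, minimizing the expected Hamiltonian plus an entropy penalty over action densities yields a Gibbs measure, and when the Hamiltonian is quadratic in the action this measure is Gaussian, with mean given by the classical LQ feedback and variance set by the temperature. The helper below, with illustrative coefficients `q`, `l`, and temperature `lam`, computes that Gaussian and draws an exploratory action.

```python
import numpy as np

def exploratory_policy(q, l, lam):
    """Gaussian minimizer of E_pi[H] + lam * E_pi[ln pi] for the
    quadratic-in-action Hamiltonian H(a) = 0.5*q*a**2 - l*a, q > 0.
    Completing the square shows pi*(a) ∝ exp(-H(a)/lam) = N(l/q, lam/q)."""
    if q <= 0 or lam <= 0:
        raise ValueError("need q > 0 and lam > 0")
    return l / q, lam / q  # (mean, variance)

# Illustrative scalar example: linear state feedback l(x) = -k*x.
k, q, lam = 0.8, 2.0, 0.5   # hypothetical gain, curvature, temperature
x = 1.5
mean, var = exploratory_policy(q, -k * x, lam)
rng = np.random.default_rng(0)
action = rng.normal(mean, np.sqrt(var))  # one sampled exploratory action
```

As `lam` shrinks, the variance `lam/q` vanishes and the action distribution collapses onto the deterministic LQ feedback, which is the sense in which the exploratory solution recovers the classical one.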
Recommendations
- Entropy Regularization for Mean Field Games with Learning
- Explicit solutions of some linear-quadratic mean field games
- \(\epsilon\)-Nash mean-field games for general linear-quadratic systems with applications
- LQG mean field games with a major agent: Nash certainty equivalence versus probabilistic approach
- Q-learning in regularized mean-field games
Cites work
- scientific article; zbMATH DE number 7307478
- \(\epsilon\)-Nash Equilibria for Major-Minor LQG Mean Field Games With Partial Observations of All Agents
- A Mean-Field Game of Evacuation in Multilevel Building
- A framework for robust quadratic optimal control with parametric dynamic model uncertainty using polynomial chaos
- An alternative approach to mean field game with major and minor players, and applications to herders impacts
- An integral control formulation of mean field game based large scale coordination of loads in smart grids
- Continuous‐time mean–variance portfolio selection: A reinforcement learning framework
- Convex analysis for LQG systems with applications to major-minor LQG mean-field game systems
- Entropy penalization methods for Hamilton-Jacobi equations
- Exponential convergence and stability of Howard's policy improvement algorithm for controlled diffusions
- Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle
- Large-Population Cost-Coupled LQG Problems With Nonuniform Agents: Individual-Mass Behavior and Decentralized \(\varepsilon\)-Nash Equilibria
- Large-population LQG games involving a major player: the Nash certainty equivalence principle
- Learning in Mean-Field Games
- Linear-quadratic mean field Stackelberg games with state and control delays
- Linear-quadratic-Gaussian mean-field-game with partial observation and common noise
- Mean field game theory with a partially observed major agent
- Mean field games
- Mean field games and mean field type control theory
- Mean field games and systemic risk
- Mean field games. I: The stationary case
- Mean field games. II: Finite horizon and optimal control
- Mean-field controls with Q-learning for cooperative MARL: convergence and complexity analysis
- Mean-field games of optimal stopping: a relaxed solution approach
- Mean-field games with a major player
- Mean-field games with differing beliefs for algorithmic trading
- Probabilistic theory of mean field games with applications I. Mean field FBSDEs, control, and games
- Remarks on Nash equilibria in mean field game models with a major player
- The Master Equation and the Convergence Problem in Mean Field Games
- The Principle of Maximum Causal Entropy for Estimating Interacting Processes
- The Variational Formulation of the Fokker-Planck Equation
- The execution problem in finance with major and minor traders: a mean field game formulation
- Value iteration algorithm for mean-field games
- \(\epsilon\)-Nash mean field game theory for nonlinear stochastic dynamical systems with major and minor agents
Cited in (13)
- A class of hybrid LQG mean field games with state-invariant switching and stopping strategies
- Reinforcement learning for continuous-time mean-variance portfolio selection in a regime-switching market
- Dual stochastic descriptions of streamflow dynamics under model ambiguity through a Markovian embedding
- Anderson acceleration for partially observable Markov decision processes: a maximum entropy approach
- Q-learning in regularized mean-field games
- Exploratory Control with Tsallis Entropy for Latent Factor Models
- Convergence of policy gradient methods for finite-horizon exploratory linear-quadratic control problems
- Recent developments in machine learning methods for stochastic control and games
- Optimal Scheduling of Entropy Regularizer for Continuous-Time Linear-Quadratic Reinforcement Learning
- Entropy Regularization for Mean Field Games with Learning
- Reinforcement learning for exploratory linear-quadratic two-person zero-sum stochastic differential games
- Deep Q-Learning for Nash Equilibria: Nash-DQN
- Exploratory HJB equations and their convergence