Tauberian theorem for value functions
From MaRDI portal
Abstract: For two-person dynamic zero-sum games (in both discrete and continuous settings), we investigate the limit of value functions of finite-horizon games with long-run average cost as the time horizon tends to infinity, and the limit of value functions of \(\lambda\)-discounted games as the discount \(\lambda\) tends to zero. We prove that the Dynamic Programming Principle for value functions directly leads to the Tauberian theorem: the existence of a uniform limit of the value functions for one of the families implies that the other family also converges uniformly to the same limit. No assumptions on strategies are necessary. To this end, we consider a mapping that takes each payoff to the corresponding value function and preserves the sub- and superoptimality principles (the Dynamic Programming Principle). With their aid, we obtain inequalities on the asymptotics of sub- and supersolutions, which lead to the Tauberian theorem. In particular, we treat differential games without relying on the existence of a saddle point; a very simple stochastic game model is also considered.
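The two averaging families in the abstract can be illustrated on a single bounded cost stream: the long-run average (Cesàro) mean over a horizon T, and the normalized discounted (Abel) mean with discount λ. The following is a minimal numeric sketch, not taken from the paper (whose theorem concerns uniform limits of value functions of games, a far stronger statement): for the alternating cost sequence 0, 1, 0, 1, …, both means approach the same limit 1/2.

```python
# Minimal illustration of the two averaging families from the abstract:
# Cesaro (long-run average) mean vs. Abel (discounted) mean of a cost stream.
# For the alternating costs 0, 1, 0, 1, ... both limits equal 1/2.

costs = [t % 2 for t in range(200_000)]  # bounded running cost c_t

def cesaro_mean(c, T):
    """(1/T) * sum of the first T costs -- the long-run average payoff."""
    return sum(c[:T]) / T

def abel_mean(c, lam):
    """lam * sum_{t>=0} (1 - lam)^t * c_t -- the normalized discounted payoff."""
    total, weight = 0.0, lam
    for x in c:
        total += weight * x
        weight *= 1.0 - lam
        if weight < 1e-15:  # remaining tail is negligible for bounded costs
            break
    return total

v_T = cesaro_mean(costs, 100_000)   # long horizon: equals 0.5 exactly for even T
v_lam = abel_mean(costs, 0.001)     # small discount: (1 - lam)/(2 - lam), close to 0.5
print(v_T, v_lam)
```

Sending T to infinity and λ to zero drives both means to the same limit; the paper's Tauberian theorem upgrades this scalar correspondence to uniform convergence of the two families of value functions, using only the Dynamic Programming Principle.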
Recommendations
Cites work
- scientific article; zbMATH DE number 5604590
- scientific article; zbMATH DE number 43570
- scientific article; zbMATH DE number 51863
- scientific article; zbMATH DE number 3468574
- scientific article; zbMATH DE number 3211396
- scientific article; zbMATH DE number 3349081
- scientific article; zbMATH DE number 3049368
- A Tauberian theorem for nonexpansive operators and applications to zero-sum stochastic games
- A Uniform Tauberian Theorem in Dynamic Programming
- A century of complex Tauberian theory
- A uniform Tauberian theorem in optimal control
- A zero-sum stochastic game with compact action sets and no asymptotic value
- Asymptotic properties in dynamic programming
- Axiomatic approach in differential games
- Book review of: L. Gawarecki and V. Mandrekar, Stochastic differential equations in infinite dimensions with applications to stochastic partial differential equations
- Discrete Dynamic Programming
- Ergodic Problems in Differential Games
- Ergodic problem for the Hamilton-Jacobi-Bellman equation. II
- General limit value in dynamic programming
- General limit value in zero-sum stochastic games
- Infinite time optimal control and periodicity
- Limit theory for controlled McKean-Vlasov dynamics
- Limit value for optimal control with general means
- On Differential Games with Long-Time-Average Cost
- On ergodic stochastic control
- On sets of occupational measures generated by a deterministic control system on an infinite time horizon
- On the Large Time Behavior of Solutions of Hamilton--Jacobi Equations
- On the existence of a limit value in some nonexpansive optimal control problems
- On the relation between discounted and average optimal value functions
- Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations
- Recursive games: uniform value, Tauberian theorem and the Mertens conjecture ``\(\mathrm{Maxmin}=\lim v_n=\lim v_\lambda\)''
- Some recent aspects of differential game theory
- Stochastic games
- Sur la convergence du semi-groupe de Lax-Oleinik
- The Asymptotic Theory of Stochastic Games
- The existence of value in differential games
- Uniform Tauberian theorem in differential games
- Uniform value in dynamic programming
- Vanishing Discount Limit and Nonexpansive Optimal Control and Differential Games
- Zero-sum repeated games: recent advances and new links with differential games
Cited in (13)
- LP based upper and lower bounds for Cesàro and Abel limits of the optimal values in problems of control of stochastic discrete time systems
- Uniform Tauberian theorem in differential games
- scientific article; zbMATH DE number 978454
- Tauberian theorem for games with unbounded running cost
- LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The Non-Ergodic Case
- Unique ergodicity of deterministic zero-sum differential games
- Tauberian theorems for general iterations of operators: applications to zero-sum stochastic games
- On asymptotic value for dynamic games with saddle point
- Asymptotics of values in dynamic games on large intervals
- Linear programming estimates for Cesàro and Abel limits of optimal values in optimal control problems
- LP-related representations of Cesàro and Abel limits of optimal value functions
- On Tauberian theorem for stationary Nash equilibria
- A Tauberian theorem for nonexpansive operators and applications to zero-sum stochastic games