A Tauberian Theorem for Nonexpansive Operators and Applications to Zero-Sum Stochastic Games
From MaRDI portal (Publication Q2833116)
DOI: 10.1287/moor.2016.0788
zbMath: 1369.47069
arXiv: 1501.06525
OpenAlex: W1769644984
MaRDI QID: Q2833116
Publication date: 16 November 2016
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1501.06525
Keywords: nonexpansive operators; repeated games; Tauberian theorem; stochastic games; asymptotic value; stochastic games with signals
MSC classifications:
- Contraction-type mappings, nonexpansive mappings, (A)-proper mappings, etc. (47H09)
- Stochastic games, stochastic differential games (91A15)
- Dynamic games (91A25)
- Multistage and repeated games (91A20)
- Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
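The setting named in the title can be summarized as follows. This is a brief sketch in the standard notation of the operator approach to zero-sum stochastic games (see the arXiv preprint linked above); the symbols \(\Psi\), \(v_n\), \(v_\lambda\) are the conventional ones and are not quoted verbatim from the paper:

```latex
% Sketch of the setting (standard operator approach; notation assumed):
% \Psi : X \to X is nonexpansive on a Banach space (X, \|\cdot\|), i.e.
%   \|\Psi(f) - \Psi(g)\| \le \|f - g\| for all f, g in X.
% Cesàro-type and Abel-type normalizations of the iterates:
\[
  v_n \;=\; \frac{\Psi^n(0)}{n},
  \qquad
  v_\lambda \;=\; \lambda\,\Psi\!\left(\frac{1-\lambda}{\lambda}\,v_\lambda\right).
\]
% Tauberian statement (hedged sketch): (v_n) converges uniformly as
% n -> infinity if and only if (v_\lambda) converges uniformly as
% \lambda -> 0, and the two limits then coincide. When \Psi is the
% Shapley operator of a zero-sum stochastic game, v_n and v_\lambda
% are the values of the n-stage and \lambda-discounted games, which
% is how the theorem yields the applications in the title.
```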
Related Items
- Recursive games: uniform value, Tauberian theorem and the Mertens conjecture ``\(\mathrm{Maxmin}=\lim v_n=\lim v_\lambda\)
- Tauberian theorem for value functions
- Tauberian theorems for general iterations of operators: applications to zero-sum stochastic games
- A uniform Tauberian theorem in dynamic games
- Generic uniqueness of the bias vector of finite zero-sum stochastic games with perfect information
- Recursive utility and parameter uncertainty
- On Tauberian theorem for stationary Nash equilibria
- An accretive operator approach to ergodic zero-sum stochastic games
- Asymptotic value in frequency-dependent games with separable payoffs: a differential approach
- Asymptotics of values in dynamic games on large intervals
- Acyclic Gambling Games
- Tauberian theorem for games with unbounded running cost
- Communicating zero-sum product stochastic games
- A formula for the value of a stochastic game
Cites Work
- Recursive games: uniform value, Tauberian theorem and the Mertens conjecture ``\(\mathrm{Maxmin}=\lim v_n=\lim v_\lambda\)
- Zero-sum repeated games: counterexamples to the existence of the asymptotic value and the conjecture \({\max}{\min}=\lim v_{n}\)
- A zero-sum stochastic game with compact action sets and no asymptotic value
- Existence of the uniform value in zero-sum repeated games with a more informed controller
- Uniform value in dynamic programming
- Existence of optimal strategies in Markov games with incomplete information
- Universally measurable strategies in zero-sum stochastic games
- Asymptotic behavior of nonexpansive mappings in normed linear spaces
- Blackwell optimality in Markov decision processes with partial observation.
- Uniform approximation of trajectories maximal to the right under the condition of asymptotic integral stability
- On stochastic games
- The Value of Repeated Games with an Informed Controller
- A Uniform Tauberian Theorem in Optimal Control
- Absorbing Games with Compact Action Spaces
- The Value of Markov Chain Games with Incomplete Information on Both Sides
- A Uniform Tauberian Theorem in Dynamic Programming
- The Asymptotic Theory of Stochastic Games
- Zero Sum Absorbing Games with Incomplete Information on One Side: Asymptotic Analysis
- Stochastic Games with a Single Controller and Incomplete Information
- Commutative Stochastic Games
- The Value of Markov Chain Games with Lack of Information on One Side
- Stochastic Games
- Stochastic games
- An operator approach to zero-sum repeated games
- A first course on zero-sum repeated games