Asymptotic Perron's method and simple Markov strategies in stochastic games and control
From MaRDI portal
Publication:5501201
Abstract: We introduce a modification of Perron's method in which semi-solutions are considered in a carefully defined asymptotic sense. With this definition, we can show, in a rather elementary way, that in a zero-sum game or a control problem (with or without model uncertainty) the value function over all strategies coincides with the value function over Markov strategies discretized in time. Therefore, there always exist discretized Markov ε-optimal strategies (uniformly with respect to a bounded initial condition). With a minor modification, the method produces a value and approximate saddle points for an asymmetric game of feedback strategies versus counter-strategies.
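In generic notation (assumed here for illustration; the paper's own symbols may differ), the abstract's central claim can be sketched as:

```latex
% Illustrative sketch only: V, J, \mathcal{A}, \mathcal{A}_{M} are
% generic symbols assumed for this note, not taken from the paper.
% \mathcal{A}   : all admissible strategies
% \mathcal{A}_M : Markov strategies discretized in time
V(t,x) \;=\; \sup_{\alpha \in \mathcal{A}} J(t,x;\alpha)
       \;=\; \sup_{\alpha \in \mathcal{A}_{M}} J(t,x;\alpha),
\qquad \text{so for every } \varepsilon > 0 \text{ there is a discretized Markov strategy } \alpha^{\varepsilon}
\text{ with } J(t,x;\alpha^{\varepsilon}) \;\ge\; V(t,x) - \varepsilon,
\text{ uniformly over bounded initial conditions } x.
```

The ε-optimality statement is exactly the coincidence of the two suprema: if the value over the smaller class of time-discretized Markov strategies already equals the full value, near-optimizers can be chosen from that class.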
Recommendations
- Stochastic Perron's method and elementary strategies for zero-sum differential games
- Asymptotically optimal strategies for adaptive zero-sum discounted Markov games
- scientific article; zbMATH DE number 4031449
- On the value of stochastic differential games
- Asymptotic behavior of continuous stochastic games
Cites work
- scientific article; zbMATH DE number 4205918
- scientific article; zbMATH DE number 192835
- scientific article; zbMATH DE number 192908
- scientific article; zbMATH DE number 4125214
- A note on the strong formulation of stochastic control problems with model uncertainty
- Another approach to the existence of value functions of stochastic differential games
- Controlled diffusion processes. Translated by A. B. Aries
- On martingale problems with continuous-time mixing and values of zero-sum games without the Isaacs condition
- On the rate of convergence of finite-difference approximations for Bellman's equations with variable coefficients
- On the value of stochastic differential games
- Optimal investment with high-watermark performance fee
- Perron's method for Hamilton-Jacobi equations
- Stochastic Perron's method and elementary strategies for zero-sum differential games
- Stochastic Perron's method and verification without smoothness using viscosity comparison: obstacle problems and Dynkin games
- Stochastic Perron's method and verification without smoothness using viscosity comparison: the linear case
- Stochastic Perron's method for Hamilton-Jacobi-Bellman equations
- Stochastic target games and dynamic programming via regularized viscosity solutions
- Strategies for differential games
- Sub- and superoptimality principles of dynamic programming revisited
- Two person zero-sum game in weak formulation and path dependent Bellman-Isaacs equation
- Value in mixed strategies for zero-sum stochastic differential games without Isaacs condition
- Values in differential games
Cited in (6)
- Feedback Stackelberg-Nash equilibria in mixed leadership games with an application to cooperative advertising
- Zero-sum stochastic differential games without the Isaacs condition: random rules of priority and intermediate Hamiltonians
- Zero-sum path-dependent stochastic differential games in weak formulation
- Stochastic Perron's method and elementary strategies for zero-sum differential games
- Dynamic programming principle for classical and singular stochastic control with discretionary stopping
- A general verification result for stochastic impulse control problems