Limit value for optimal control with general means
From MaRDI portal
Abstract: We consider an optimal control problem with an integral cost that is a mean of a given function; a particular case of this cost is the Cesàro average. The limit of the value with the Cesàro mean as the horizon tends to infinity is widely studied in the literature. We address the more general question of the existence of a limit when the averaging parameter converges, for values defined with means of general types. We consider a given function and a family of costs defined as the mean of the function with respect to a family of probability measures -- the evaluations -- on R_+. We provide conditions on the evaluations that ensure the uniform convergence of the associated value functions (as the parameter of the family converges). Our main result gives a necessary and sufficient condition in terms of the total variation of the family of probability measures on R_+. As a byproduct, we obtain the existence of a limit value (for general means) for control systems having a compact invariant set and satisfying a suitable nonexpansivity property.
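As a sketch of the setting the abstract describes (the notation V, f, x, u below is assumed for illustration and is not taken from this page): with the Cesàro evaluation, the value with horizon T averages the running cost uniformly over [0, T],

```latex
\[
  V_T(x_0) \;=\; \inf_{u(\cdot)} \; \frac{1}{T}\int_0^T f\bigl(x(t;x_0,u)\bigr)\,dt ,
\]
```

which is the mean of \(f\) along the trajectory with respect to the uniform probability measure on \([0,T]\). For a general evaluation, i.e. a probability measure \(\theta\) on \(\mathbb{R}_+\), the value is

```latex
\[
  V_\theta(x_0) \;=\; \inf_{u(\cdot)} \int_0^{+\infty} f\bigl(x(t;x_0,u)\bigr)\,d\theta(t),
\]
```

and the question studied is whether \(V_{\theta_n}\) converges uniformly as the family of evaluations \((\theta_n)\) converges, with the answer characterized in terms of the total variation of that family.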
Recommendations
- On representation formulas for long run averaging optimal control problem
- On the existence of a limit value in some nonexpansive optimal control problems
- LP-related representations of Cesàro and Abel limits of optimal value functions
- Existence of asymptotic values for nonexpansive stochastic control systems
- scientific article; zbMATH DE number 124641
Cites work
- scientific article; zbMATH DE number 3944646 (no title available)
- scientific article; zbMATH DE number 193190 (no title available)
- scientific article; zbMATH DE number 3366247 (no title available)
- A first course on zero-sum repeated games
- A note on general Tauberian-type results for controlled stochastic dynamics
- A uniform Tauberian theorem in optimal control
- Ergodic problem for the Hamilton-Jacobi-Bellman equation. I: Existence of the ergodic attractor
- Ergodicity, stabilization, and singular perturbations for Bellman-Isaacs equations
- Existence of asymptotic values for nonexpansive stochastic control systems
- General limit value in dynamic programming
- General limit value in zero-sum stochastic games
- Long-term values in Markov decision processes and repeated games, and a new distance for probability spaces
- On ergodic stochastic control
- On the existence of a limit value in some nonexpansive optimal control problems
- Tauberian theorem for games with unbounded running cost
- Uniform value in dynamic programming
Cited in (10)
- Existence of asymptotic values for nonexpansive stochastic control systems
- On representation formulas for long run averaging optimal control problem
- Tauberian theorem for value functions
- A note on general Tauberian-type results for controlled stochastic dynamics
- Representation formulas for limit values of long run stochastic optimal controls
- Asymptotics of values in dynamic games on large intervals
- Acyclic Gambling Games
- On the existence of a limit value in some nonexpansive optimal control problems
- The folk theorem for repeated games with time-dependent discounting
- On Tauberian theorem for stationary Nash equilibria
This page was built for publication: Limit value for optimal control with general means (MaRDI item Q887698)