General limit value in dynamic programming
From MaRDI portal
Abstract: We consider a dynamic programming problem with arbitrary state space and bounded rewards. Is it possible to define in a unique way a limit value for the problem, where the "patience" of the decision-maker tends to infinity? We consider, for each evaluation \(\theta\) (a probability distribution over positive integers), the value function \(v_\theta\) of the problem where the weight of any stage \(t\) is given by \(\theta_t\), and we investigate the uniform convergence of a sequence \((v_{\theta^k})_k\) when the "impatience" of the evaluations vanishes, in the sense that \(TV(\theta^k)=\sum_{t\geq 1}|\theta^k_{t+1}-\theta^k_t| \to 0\) as \(k\to\infty\). We prove that this uniform convergence happens if and only if the metric space \(\{v_{\theta^k},\, k\geq 1\}\) is totally bounded. Moreover, there exists a particular function \(v^*\), independent of the particular chosen sequence \((\theta^k)_k\), such that any limit point of such a sequence of value functions is precisely \(v^*\). Consequently, in the sense of uniform convergence of the value functions, \(v^*\) may be considered as the unique possible limit when the patience of the decision-maker tends to infinity. The result applies in particular to discounted payoffs when the discount factor vanishes, as well as to average payoffs when the number of stages goes to infinity, and also to models with stochastic transitions. We present tractable corollaries, and we discuss counterexamples and a conjecture.
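As a concrete illustration of the vanishing-impatience condition \(TV(\theta)=\sum_{t\geq 1}|\theta_{t+1}-\theta_t|\to 0\), here is a minimal Python sketch (the function names, the truncation horizon, and the numerical parameters are our own, not from the paper) computing the total variation of two standard families of evaluations: discounted weights \(\theta_t=\lambda(1-\lambda)^{t-1}\) and Cesàro averages \(\theta_t=1/n\) for \(t\leq n\).

```python
# Illustrative sketch (assumed notation): an evaluation theta is a probability
# distribution over positive integers; its "impatience" is the total variation
#   TV(theta) = sum_{t>=1} |theta_{t+1} - theta_t|.
# We check on two standard families that TV vanishes as patience grows.

def total_variation(theta):
    """TV of a finitely supported evaluation, given as a list of stage
    weights (weights beyond the end of the list are treated as 0)."""
    padded = list(theta) + [0.0]
    return sum(abs(padded[t + 1] - padded[t]) for t in range(len(theta)))

def discounted(lam, horizon=100_000):
    """Truncated discounted evaluation: theta_t = lam * (1 - lam)**(t - 1)."""
    return [lam * (1 - lam) ** t for t in range(horizon)]

def cesaro(n):
    """Uniform average over the first n stages: theta_t = 1/n for t <= n."""
    return [1.0 / n] * n

# The discounted weights decrease monotonically from lam to ~0, so the sum
# telescopes and TV is (up to truncation) exactly lam; the Cesàro evaluation
# is flat with a single jump of size 1/n at stage n, so TV = 1/n.
print(total_variation(discounted(0.01)))  # ~ 0.01
print(total_variation(cesaro(50)))        # = 0.02
```

Both families thus satisfy the paper's hypothesis: letting \(\lambda\to 0\) (more patient discounting) or \(n\to\infty\) (longer averaging) drives the impatience \(TV(\theta)\) to zero.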
Recommendations
- scientific article; zbMATH DE number 3923537
- Generalization of the dynamic programming scheme
- Approximation limitations of pure dynamic programming
- Generalized dynamic programming. General points
- On Bounds for Dynamic Programs
- Asymptotic properties in dynamic programming
- \(\varepsilon\)-value function and dynamic programming
- Uniform value in dynamic programming
- scientific article; zbMATH DE number 3863956
- On dynamic programming with unbounded returns
Cites work
- scientific article; zbMATH DE number 893883
- A Uniform Tauberian Theorem in Dynamic Programming
- Asymptotic properties in dynamic programming
- Discounting versus averaging in dynamic programming
- Discrete Dynamic Programming
- Letter to the Editor—Criterion Equivalence in Discrete Dynamic Programming
- Stochastic games
- Uniform value in dynamic programming
Cited in (15)
- Uniform value in dynamic programming
- Asymptotic properties of optimal trajectories in dynamic programming
- Bounded variation of \(\{V_ n\}\) and its limit
- On Tauberian theorem for stationary Nash equilibria
- Tauberian theorem for value functions
- Long-term values in Markov decision processes and repeated games, and a new distance for probability spaces
- Tauberian theorems for general iterations of operators: applications to zero-sum stochastic games
- Limit value for optimal control with general means
- Functional equations in the theory of dynamic programming. XI: Limit theorems
- The folk theorem for repeated games with time-dependent discounting
- General Discounting Versus Average Reward
- On representation formulas for long run averaging optimal control problem
- Abel-type results for controlled piecewise deterministic Markov processes
- A Uniform Tauberian Theorem in Dynamic Programming
- Asymptotics of values in dynamic games on large intervals