LP based upper and lower bounds for Cesàro and Abel limits of the optimal values in problems of control of stochastic discrete time systems
From MaRDI portal
Abstract: In this paper, we study asymptotic properties of problems of control of stochastic discrete time systems (also known as Markov decision processes) with time-averaging and time-discounting optimality criteria, and we establish that the Cesàro and Abel limits of the optimal values in such problems can be evaluated with the help of a certain infinite-dimensional linear programming problem and its dual.
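As background for the two criteria named in the abstract, the standard textbook definitions can be written as follows (a sketch in generic MDP notation; the symbols $c$, $x_t$, $u_t$, $\pi$ are illustrative and not taken from the paper itself):

```latex
% Cesàro (time-averaging) limit of the optimal values:
\lim_{T\to\infty}\; \inf_{\pi}\; \frac{1}{T}\sum_{t=0}^{T-1}
  \mathbb{E}^{\pi}\!\left[\, c(x_t, u_t) \,\right]

% Abel (time-discounting) limit, as the discount factor \alpha \uparrow 1:
\lim_{\alpha\uparrow 1}\; \inf_{\pi}\; (1-\alpha)\sum_{t=0}^{\infty}
  \alpha^{t}\, \mathbb{E}^{\pi}\!\left[\, c(x_t, u_t) \,\right]
```

Here $\pi$ ranges over admissible control policies and $c$ is the running cost; the paper's contribution, per the abstract, is bounding and evaluating these two limits via an infinite-dimensional linear program and its dual.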
Recommendations
- Linear programming estimates for Cesàro and Abel limits of optimal values in optimal control problems
- LP-related representations of Cesàro and Abel limits of optimal value functions
- Limiting average cost control problems in a class of discrete-time stochastic systems
- LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The NonErgodic Case
- Linear programming formulations of deterministic infinite horizon optimal control problems in discrete time
Cites work
- scientific article (untitled; zbMATH DE number 3886197)
- scientific article (untitled; zbMATH DE number 3975284)
- scientific article (untitled; zbMATH DE number 4029251)
- scientific article (untitled; zbMATH DE number 49674)
- scientific article (untitled; zbMATH DE number 193548)
- scientific article (untitled; zbMATH DE number 1325008)
- scientific article (untitled; zbMATH DE number 1348599)
- scientific article (untitled; zbMATH DE number 1022519)
- scientific article (untitled; zbMATH DE number 1119444)
- scientific article (untitled; zbMATH DE number 1786123)
- scientific article (untitled; zbMATH DE number 3793773)
- scientific article (untitled; zbMATH DE number 3394474)
- A Uniform Tauberian Theorem in Dynamic Programming
- A review of duality theory for linear programming over topological vector spaces
- An $\varepsilon $-Optimal Control of a Finite Markov Chain with an Average Reward Criterion
- Analytic perturbation theory and its applications
- Asymptotic Controllability and Exponential Stabilization of Nonlinear Control Systems at Singular Points
- Average cost Markov decision processes with weakly continuous transition probabilities
- Averaging and linear programming in some singularly perturbed problems of optimal control
- Constrained Undiscounted Stochastic Dynamic Programming
- Constraint augmentation in pseudo-singularly perturbed linear programs
- Discounted continuous-time Markov decision processes with unbounded rates: the convex analytic approach
- Discounting versus averaging in dynamic programming
- Discrete-Time Controlled Markov Processes with Average Cost Criterion: A Survey
- Error bounds for rolling horizon policies in discrete-time Markov control processes
- Examples in Markov decision processes
- Infinite Linear Programming and Multichain Markov Control Processes in Uncountable Spaces
- LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The NonErgodic Case
- Linear Programming and Markov Decision Chains
- Linear programming and sequential decisions
- Linear programming formulation of MDPs in countable state space: The multichain case
- Linear programming formulation of long-run average optimal control problem
- Linear programming formulations of deterministic infinite horizon optimal control problems in discrete time
- Markov decision processes with applications to finance.
- Multichain Markov Renewal Programs
- On Linear Programming in a Markov Decision Problem
- On optimality criteria for dynamic programs with long finite horizons
- On the relation between discounted and average optimal value functions
- Sample path average optimality of Markov control processes with strictly unbounded cost
- Singularly perturbed linear programs and Markov decision processes
- Some comments on a theorem of Hardy and Littlewood
- Stochastically recursive sequences and their generalizations
- Tauberian theorem for value functions
- Uniform value in dynamic programming
Cited in (5)
- Time-average stochastic control based on a singular local Lévy model for environmental project planning under habit formation
- Examples concerning Abel and Cesàro limits
- Linear programming estimates for Cesàro and Abel limits of optimal values in optimal control problems
- On representation formulas for long run averaging optimal control problem
- Lack of equality between Abel and Cesàro limits in discrete optimal control and the implied duality gap
MaRDI item Q831480