LP based upper and lower bounds for Cesàro and Abel limits of the optimal values in problems of control of stochastic discrete time systems
From MaRDI portal
Publication: 831480
DOI: 10.1016/j.jmaa.2022.126121
zbMath: 1489.90207
arXiv: 2010.15375
OpenAlex: W4221087484
MaRDI QID: Q831480
Lucas Gamertsfelder, Vladimir Gaitsgory, Konstantin E. Avrachenkov
Publication date: 23 March 2022
Published in: Journal of Mathematical Analysis and Applications
Full work available at URL: https://arxiv.org/abs/2010.15375
Keywords: linear programming; dynamic programming; stochastic optimal control; optimality conditions; discrete time; Markov decision processes (MDPs)
Related Items (1)
Cites Work
- Uniform value in dynamic programming
- Markov decision processes with applications to finance.
- A review of duality theory for linear programming over topological vector spaces
- On optimality criteria for dynamic programs with long finite horizons
- On the relation between discounted and average optimal value functions
- Discounting versus averaging in dynamic programming
- Some comments on a theorem of Hardy and Littlewood
- Tauberian theorem for value functions
- Singularly perturbed linear programs and Markov decision processes
- Stochastically recursive sequences and their generalizations
- Averaging and linear programming in some singularly perturbed problems of optimal control
- Linear programming formulations of deterministic infinite horizon optimal control problems in discrete time
- Linear programming formulation of long-run average optimal control problem
- Constraint augmentation in pseudo-singularly perturbed linear programs
- Linear Programming and Sequential Decisions
- Analytic Perturbation Theory and Its Applications
- Average Cost Markov Decision Processes with Weakly Continuous Transition Probabilities
- Examples in Markov Decision Processes
- Discounted Continuous-Time Markov Decision Processes with Unbounded Rates: The Convex Analytic Approach
- Constrained Undiscounted Stochastic Dynamic Programming
- Linear Programming and Markov Decision Chains
- An $\varepsilon $-Optimal Control of a Finite Markov Chain with an Average Reward Criterion
- A Uniform Tauberian Theorem in Dynamic Programming
- Asymptotic Controllability and Exponential Stabilization of Nonlinear Control Systems at Singular Points
- Linear programming formulation of MDPs in countable state space: The multichain case
- Infinite Linear Programming and Multichain Markov Control Processes in Uncountable Spaces
- Sample path average optimality of Markov control processes with strictly unbounded cost
- Discrete-Time Controlled Markov Processes with Average Cost Criterion: A Survey
- Error bounds for rolling horizon policies in discrete-time Markov control processes
- LP Formulations of Discrete Time Long-Run Average Optimal Control Problems: The Non-Ergodic Case
- On Linear Programming in a Markov Decision Problem
- Multichain Markov Renewal Programs