Finite-State Approximations to Discounted and Average Cost Constrained Markov Decision Processes

From MaRDI portal

DOI: 10.1109/TAC.2018.2890756
zbMATH Open: 1482.93710
arXiv: 1807.02994
OpenAlex: W2879935397
Wikidata: Q128688624 (Scholia: Q128688624)
MaRDI QID: Q5223783
FDO: Q5223783


Author: Naci Saldi


Publication date: 18 July 2019

Published in: IEEE Transactions on Automatic Control

Abstract: In this paper, we consider the finite-state approximation of a discrete-time constrained Markov decision process (MDP) under the discounted and average cost criteria. Using the linear programming formulation of the constrained discounted-cost problem, we prove that the optimal value of the finite-state model converges to the optimal value of the original model. Under a further continuity condition on the transition probability, we also establish a method for computing approximately optimal policies. For the average cost, instead of the finite-state linear programming approximation, we work directly with the original problem definition to establish the finite-state approximation of the constrained problem and to compute approximately optimal policies. Under Lipschitz-type regularity conditions on the components of the MDP, we also obtain explicit rate-of-convergence bounds quantifying how the approximation improves as the size of the approximating finite state space increases.
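The discounted-cost analysis in the abstract rests on the occupation-measure linear programming formulation of a constrained MDP, which for a finite model can be written down and solved directly. The sketch below is an illustration of that standard LP, not the paper's algorithm or code: the names P, c, d, theta, mu0, and gamma are assumed notation for the transition kernel, cost, a single constraint cost with budget theta, initial distribution, and discount factor, and scipy.optimize.linprog is used only as a convenient off-the-shelf solver.

```python
import numpy as np
from scipy.optimize import linprog


def solve_constrained_discounted_mdp(P, c, d, theta, mu0, gamma):
    """Solve a finite constrained discounted MDP via its occupation-measure LP.

    P[s, a, s'] : transition probabilities
    c[s, a]     : cost to be minimized (expected total discounted cost)
    d[s, a]     : constraint cost (one constraint, for simplicity)
    theta       : budget on the expected total discounted constraint cost
    mu0[s]      : initial state distribution
    gamma       : discount factor in (0, 1)
    """
    S, A, _ = P.shape
    n = S * A  # decision variables: occupation measure rho(s, a), flattened row-major

    # Balance constraints: for every state s',
    #   sum_a rho(s', a) - gamma * sum_{s, a} P(s'|s, a) rho(s, a) = mu0(s')
    A_eq = np.zeros((S, n))
    for sp in range(S):
        for s in range(S):
            for a in range(A):
                col = s * A + a
                A_eq[sp, col] -= gamma * P[s, a, sp]
                if s == sp:
                    A_eq[sp, col] += 1.0
    b_eq = mu0

    # Constraint cost: sum_{s, a} d(s, a) rho(s, a) <= theta
    A_ub = d.reshape(1, n)
    b_ub = np.array([theta])

    res = linprog(c.reshape(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError("LP infeasible or solver failed: " + res.message)

    rho = res.x.reshape(S, A)
    # An optimal stationary (possibly randomized) policy is recovered by normalizing
    # rho over actions; states with zero occupation mass keep a zero row here.
    policy = rho / np.maximum(rho.sum(axis=1, keepdims=True), 1e-12)
    return res.fun, policy
```

The randomized policy recovered from rho is the expected outcome for constrained problems, where deterministic optimal policies need not exist. In the paper's setting the original model has a general state space; a sketch like this would be applied to the quantized finite-state model whose optimal value is shown to converge to that of the original problem.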


Full work available at URL: https://arxiv.org/abs/1807.02994







Cited In (7)




