Accelerated modified policy iteration algorithms for Markov decision processes

Publication:2391867

DOI: 10.1007/S00186-013-0432-Y
zbMATH Open: 1273.90234
arXiv: 0806.0320
OpenAlex: W2081102656
Wikidata: Q115149137
Scholia: Q115149137
MaRDI QID: Q2391867
FDO: Q2391867


Authors: Oleksandr Shlakhter, Chi-Guhn Lee


Publication date: 5 August 2013

Published in: Mathematical Methods of Operations Research

Abstract: One of the most widely used methods for solving average cost MDP problems is the value iteration method. This method, however, is often computationally impractical and limited in the size of MDP problems it can solve. We propose acceleration operators that improve the performance of value iteration for average reward MDP models. These operators are based on two important properties of the Markovian operator: contraction mapping and monotonicity. It is well known that the classical relative value iteration methods for average cost MDPs do not enjoy the max-norm contraction or monotonicity property. To overcome this difficulty, we propose to combine the acceleration operators with variants of value iteration for the stochastic shortest path problems associated with average reward problems.
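
The abstract refers to value iteration and relative value iteration as the baseline methods on which the proposed acceleration operators build. The following is a minimal sketch of those two standard methods for a small tabular MDP; the paper's acceleration operators are not reproduced here, and the transition tensor P, reward array r, and the tiny two-state example are hypothetical.

```python
# A minimal sketch (not the paper's method) of two baseline algorithms named in
# the abstract: discounted value iteration and average-reward relative value
# iteration, for a small tabular MDP. The MDP data below are hypothetical.

import numpy as np

def value_iteration(P, r, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Discounted value iteration: V <- max_a [ r(a, s) + gamma * sum_s' P(a, s, s') V(s') ]."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        V_new = (r + gamma * (P @ V)).max(axis=0)   # Bellman optimality update
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

def relative_value_iteration(P, r, tol=1e-8, max_iter=10_000):
    """Average-reward relative value iteration with reference state 0."""
    n_states = P.shape[1]
    h = np.zeros(n_states)
    for _ in range(max_iter):
        Th = (r + P @ h).max(axis=0)                # one-step Bellman operator
        h_new = Th - Th[0]                          # subtract value at reference state
        if np.max(np.abs(h_new - h)) < tol:
            return Th[0], h_new                     # gain estimate (h[0] is normalized to 0), bias vector
        h = h_new
    return Th[0], h

# Hypothetical 2-state, 2-action MDP: P[a, s, s'] transition probabilities, r[a, s] rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])

print("discounted values:", value_iteration(P, r))
print("gain, bias:", relative_value_iteration(P, r))
```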


Full work available at URL: https://arxiv.org/abs/0806.0320








Cites Work


Cited In (11)





