Acceleration Operators in the Value Iteration Algorithms for Markov Decision Processes

From MaRDI portal
Publication:3100461

DOI: 10.1287/OPRE.1090.0705
zbMATH Open: 1226.90130
arXiv: math/0506489
OpenAlex: W2077559824
MaRDI QID: Q3100461
FDO: Q3100461

Chi-Guhn Lee, Nasser Jaber, Dmitry Khmelev, Oleksandr Shlakhter

Publication date: 24 November 2011

Published in: Operations Research

Abstract: We study a general approach to accelerating the convergence of the most widely used solution method for Markov decision processes with total expected discounted reward. Inspired by the monotone behavior of the contraction mappings on the feasible set of the linear programming problem equivalent to the MDP, we establish a class of operators that can be used in combination with a contraction mapping operator in the standard value iteration algorithm and its variants. We then propose two such operators, which can be easily implemented as part of the value iteration algorithm and its variants. Numerical studies show that the computational savings can be significant, especially when the discount factor approaches 1 and the transition probability matrix becomes dense, in which case the standard value iteration algorithm and its variants suffer from slow convergence.
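For context, the baseline algorithm that the paper's acceleration operators are combined with is standard value iteration, i.e. repeated application of the Bellman optimality operator (a gamma-contraction in the sup norm). The sketch below shows only this plain baseline for a finite discounted MDP; the paper's specific acceleration operators are not reproduced here, and the function name and tensor layout are illustrative assumptions.

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-8, max_iter=100_000):
    """Plain value iteration for a finite discounted MDP (baseline, no acceleration).

    P: transition tensor of shape (A, S, S), P[a, s, s2] = Pr(s2 | s, a)
    R: reward matrix of shape (S, A), R[s, a] = expected immediate reward
    gamma: discount factor in [0, 1)
    Returns the (approximate) optimal value function V and a greedy policy.
    """
    S = R.shape[0]
    V = np.zeros(S)
    for _ in range(max_iter):
        # Bellman optimality backup: Q[s, a] = R[s, a] + gamma * sum_s2 P[a, s, s2] * V[s2]
        Q = R + gamma * np.einsum("asx,x->sa", P, V)
        V_new = Q.max(axis=1)
        # The operator is a gamma-contraction, so successive iterates converge
        # linearly at rate gamma; slow when gamma is close to 1, as the abstract notes.
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # Greedy policy with respect to the final value function.
    policy = (R + gamma * np.einsum("asx,x->sa", P, V)).argmax(axis=1)
    return V, policy
```

Because the convergence rate is governed by the discount factor, the number of sweeps grows roughly like 1/(1 - gamma), which is the regime where the acceleration operators studied in the paper pay off.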


Full work available at URL: https://arxiv.org/abs/math/0506489






Cited In (3)







This page was built for publication: Acceleration Operators in the Value Iteration Algorithms for Markov Decision Processes
