Acceleration Operators in the Value Iteration Algorithms for Markov Decision Processes
From MaRDI portal
Publication:3100461
Abstract: We study a general approach to accelerating the convergence of the most widely used solution method for Markov decision processes with total expected discounted reward. Inspired by the monotone behavior of the contraction mappings on the feasible set of the linear programming problem equivalent to the MDP, we establish a class of operators that can be used in combination with a contraction mapping operator in the standard value iteration algorithm and its variants. We then propose two such operators, which can be easily implemented as part of the value iteration algorithm and its variants. Numerical studies show that the computational savings can be significant, especially when the discount factor approaches 1 and the transition probability matrix becomes dense, a regime in which the standard value iteration algorithm and its variants suffer from slow convergence.
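The baseline the paper accelerates is standard value iteration: repeated application of the Bellman contraction operator until the iterates settle. The sketch below, on a hypothetical two-state, two-action MDP (the states, actions, rewards, and transition probabilities are illustrative assumptions, not taken from the paper), shows why convergence slows as the discount factor approaches 1: the stopping bound scales with gamma/(1-gamma).

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustrative only; not from the paper).
# P[a, s, s'] is the probability of moving from s to s' under action a;
# R[a, s] is the expected one-step reward for taking action a in state s.
P = np.array([[[0.9, 0.1],
               [0.4, 0.6]],
              [[0.2, 0.8],
               [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.95  # discount factor; iteration counts grow as gamma -> 1

def bellman(v):
    """Contraction operator T: (Tv)(s) = max_a [R(a,s) + gamma * sum_s' P(a,s,s') v(s')]."""
    q = R + gamma * (P @ v)   # q[a, s]; P @ v sums over the next-state axis
    return q.max(axis=0)

def value_iteration(tol=1e-8, max_iter=100_000):
    v = np.zeros(2)
    for k in range(1, max_iter + 1):
        v_new = bellman(v)
        # T is a gamma-contraction in the sup norm, so
        # ||v_new - v*|| <= gamma/(1-gamma) * ||v_new - v||,
        # which justifies this residual-based stopping rule.
        if np.max(np.abs(v_new - v)) < tol * (1 - gamma) / gamma:
            return v_new, k
        v = v_new
    return v, max_iter
```

The acceleration operators proposed in the paper are applied between such Bellman updates to tighten the iterates; their exact form is given in the paper itself and is not reproduced here.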
Recommendations
- Accelerated modified policy iteration algorithms for Markov decision processes
- Accelerating Procedures of the Value Iteration Algorithm for Discounted Markov Decision Processes, Based on a One-Step Lookahead Analysis
- Multiply accelerated value iteration for nonsymmetric affine fixed point problems and application to Markov decision processes
- Accelerating the convergence of value iteration by using partial transition functions
- An Accelerated Value/Policy Iteration Scheme for Optimal Control Problems and Games
- The convergence of value iteration in discounted Markov decision processes
Cited in (10)
- A note on generalized second-order value iteration in Markov decision processes
- Multiply accelerated value iteration for nonsymmetric affine fixed point problems and application to Markov decision processes
- Accelerating the convergence of value iteration by using partial transition functions
- Accelerated modified policy iteration algorithms for Markov decision processes
- Generic rank-one corrections for value iteration in Markovian decision problems
- A First-Order Approach to Accelerated Value Iteration
- Factored value iteration converges
- Prioritization methods for accelerating MDP solvers
- On iterative optimization of structured Markov decision processes with discounted rewards
- Value set iteration for Markov decision processes