Iterative Aggregation-Disaggregation Procedures for Discounted Semi-Markov Reward Processes
Publication: 3688129
DOI: 10.1287/opre.33.3.589
zbMath: 0571.90094
OpenAlex: W2160805324
MaRDI QID: Q3688129
Kyle W. Kindle, Martin L. Puterman, Paul J. Schweitzer
Publication date: 1985
Published in: Operations Research
Full work available at URL: https://doi.org/10.1287/opre.33.3.589
Keywords: aggregation; disaggregation; computational experience; successive approximation; large state space; total discounted expected reward; finite semi-Markov reward process
Related Items (10)
An iterative aggregation-disaggregation algorithm for solving linear equations
Discrete time controllable processes with bounded drift and their application in queueing systems
Reward revision and the average reward Markov decision process
Replacement process decomposition for discounted Markov renewal programming
Abstraction and approximate decision-theoretic planning
Asymptotic Expansions for Stationary Distributions of Perturbed Semi-Markov Processes
Markov decision processes
Modified iterative aggregation procedure for maintenance optimisation of multi-component systems with failure interaction
Stochastic dynamic programming with factored representations
Estimating equilibrium probabilities for band diagonal Markov chains using aggregation and disaggregation techniques