A Survey of Applications of Markov Decision Processes

From MaRDI portal
Publication:4287645

DOI: 10.1057/jors.1993.181
zbMath: 0798.90131
OpenAlex: W1978942630
MaRDI QID: Q4287645

Douglas J. White

Publication date: 12 April 1994

Published in: Journal of the Operational Research Society

Full work available at URL: https://doi.org/10.1057/jors.1993.181

Related Items (23)

On undiscounted semi-Markov decision processes with absorbing states
An online prediction algorithm for reinforcement learning with linear function approximation using cross entropy method
Unnamed Item
A perturbation approach to a class of discounted approximate value iteration algorithms with Borel spaces
A dynamical approach to compatible and incompatible questions
Markov Reward Models and Markov Decision Processes in Discrete and Continuous Time: Performance Evaluation and Optimization
Computing semi-stationary optimal policies for multichain semi-Markov decision processes
The stochastic shortest path problem: a polyhedral combinatorics perspective
Some remarks on cops and drunk robbers
Optimal management of stochastic invasion in a metapopulation with Allee effects
Heuristic algorithm for nested Markov decision process: solution quality and computational complexity
Stochastic shortest path problems with associative accumulative criteria
Markov decision processes under model uncertainty
Optimal threshold probability in undiscounted Markov decision processes with a target set
Application of reinforcement learning to the game of Othello
Markov decision processes
Cooperative and non-cooperative behaviour in the exploitation of a common renewable resource with environmental stochasticity
Optimal threshold probability and expectation in semi-Markov decision processes
Improved bound on the worst case complexity of policy iteration
Advances in Bayesian decision making in reliability
Stochastic revision opportunities in Markov decision problems
Unnamed Item
Natural actor-critic algorithms

This page was built for publication: A Survey of Applications of Markov Decision Processes