Markov Reward Models and Markov Decision Processes in Discrete and Continuous Time: Performance Evaluation and Optimization
Publication: 2937736
DOI: 10.1007/978-3-662-45489-3_6
zbMATH: 1426.68190
OpenAlex: W90968195
MaRDI QID: Q2937736
Markus Siegle, Alexander Gouberman
Publication date: 12 January 2015
Published in: Stochastic Model Checking. Rigorous Dependability Analysis Using Model Checking Techniques for Stochastic Systems
Full work available at URL: https://doi.org/10.1007/978-3-662-45489-3_6
Mathematics Subject Classification:
- Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) (60J20)
- Models and methods for concurrent and distributed computing (process algebras, bisimulation, transition nets, etc.) (68Q85)
- Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) (68Q87)
Related Items (2)
Uses Software
Cites Work
- Markov decision processes with applications to finance
- A survey of Markov decision models for control of networks of queues
- Markov decision processes with their applications
- NP-hardness of checking the unichain condition in average cost MDPs
- Planning with Markov Decision Processes: An AI Perspective
- Decentralized Markov Decision Processes for Handling Temporal and Resource constraints in a Multiple Robot System
- Semi-Markov Risk Models for Finance, Insurance and Reliability
- Learning Representation and Control in Markov Decision Processes: New Frontiers
- Poisson Arrivals See Time Averages
- An Analysis of Stochastic Shortest Path Problems
- A Survey of Applications of Markov Decision Processes
- State-space support for path-based reward variables
- Approximate Dynamic Programming
- SERIES EXPANSIONS FOR FINITE-STATE MARKOV CHAINS
- Queueing Networks and Markov Chains
- Scientific Applications: An algorithm for identifying the ergodic subchains and transient states of a stochastic matrix
- Applied Semi-Markov Processes