On the total reward variance for continuous-time Markov reward chains
DOI: 10.1239/jap/1165505206 · zbMath: 1169.90479 · OpenAlex: W2069131959 · MaRDI QID: Q5441521
Nico M. van Dijk, Karel Sladký
Publication date: 15 February 2008
Published in: Journal of Applied Probability
Full work available at URL: https://doi.org/10.1239/jap/1165505206
Minimax problems in mathematical programming (90C47) ⋮ Markov and semi-Markov decision processes (90C40) ⋮ Continuous-time Markov processes on discrete state spaces (60J27)
Related Items (5)
Optimizing a production-inventory system under a cost target ⋮ Variance minimization for constrained discounted continuous-time MDPs with exponentially distributed stopping times ⋮ Mean-variance problems for finite horizon semi-Markov decision processes ⋮ On minimizing downside risk in make‐to‐stock, risk‐averse firms ⋮ An Inequality for Variances of the Discounted Rewards
Cites Work
- Maximal mean/standard deviation ratio in an undiscounted MDP
- Markov decision processes with a minimum-variance criterion
- A variance minimization problem for a Markov decision process
- Markov decision processes with a new optimality criterion: Continuous time
- Mean, variance and probabilistic criteria in finite Markov decision processes: A review
- Markov decision processes with a new optimality criterion: Discrete time
- Calculating the variance in Markov-processes with random reward
- Variance-Penalized Markov Decision Processes
- A Utility Criterion for Markov Decision Processes
- On Finding Optimal Policies for Markov Decision Chains: A Unifying Framework for Mean-Variance-Tradeoffs
- The variance of discounted Markov decision processes
- On Finding the Maximal Gain for Markov Decision Processes
- Markov Decision Processes with a New Optimality Criterion: Small Interest Rates