On terminating Markov decision processes with a risk-averse objective function
DOI: 10.1016/S0005-1098(01)00084-X
zbMath: 0995.93075
OpenAlex: W2042459810
Wikidata: Q126742945
Scholia: Q126742945
MaRDI QID: Q5947647
Publication date: 17 October 2002
Published in: Automatica
Full work available at URL: https://doi.org/10.1016/s0005-1098(01)00084-x
Keywords: convergence; dynamic programming; policy iteration; risk-sensitive finite states Markov decision processes; stochastic shortest paths; terminating problem; value iteration
MSC classifications: Dynamic programming in optimal control and differential games (49L20); Optimal stochastic control (93E20); Markov and semi-Markov decision processes (90C40); Existence of optimal solutions to problems involving randomness (49J55)
Related Items (8)
Cites Work
- State-space formulae for all stabilizing controllers that satisfy an \(H_{\infty}\)-norm bound and relations to risk sensitivity
- Risk-sensitive and minimax control of discrete-time, finite-state Markov decision processes
- Risk sensitive control of Markov processes in countable state space
- Multiplicative Markov Decision Chains
- Discounted MDP’s: Distribution Functions and Exponential Utility Maximization
- Risk-sensitive linear/quadratic/gaussian control
- An Analysis of Stochastic Shortest Path Problems
- Stochastic Shortest Path Games
- The equivalence between infinite-horizon optimal control of stochastic systems with exponential-of-integral performance index and stochastic differential games
- Optimal stochastic linear systems with exponential performance criteria and their relation to deterministic differential games
- Risk-Sensitive Markov Decision Processes