A note on negative dynamic programming for risk-sensitive control
From MaRDI portal
Publication:957337
DOI: 10.1016/j.orl.2008.03.003
zbMath: 1210.90171
OpenAlex: W2124480392
MaRDI QID: Q957337
Publication date: 27 November 2008
Published in: Operations Research Letters
Full work available at URL: https://doi.org/10.1016/j.orl.2008.03.003
Related Items (10)
- Bio-inspired paradigms in network engineering games
- A useful technique for piecewise deterministic Markov decision processes
- Markov decision processes with iterated coherent risk measures
- On risk-sensitive piecewise deterministic Markov decision processes
- Continuous-Time Markov Decision Processes with Exponential Utility
- Unnamed Item
- First Passage Exponential Optimality Problem for Semi-Markov Decision Processes
- A Variational Formula for Risk-Sensitive Reward
- Finite horizon risk-sensitive continuous-time Markov decision processes with unbounded transition and cost rates
- On gradual-impulse control of continuous-time Markov decision processes with exponential utility
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Nearly optimal policies in risk-sensitive positive dynamic programming on discrete spaces.
- Markov decision processes with a new optimality criterion: Discrete time
- Measurable selections of extrema
- Discounted MDP’s: Distribution Functions and Exponential Utility Maximization
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- A Utility Criterion for Markov Decision Processes
- Optimal stationary policies in risk-sensitive dynamic programs with finite state space and nonnegative rewards
- Risk Aversion in the Small and in the Large
- Discounted Dynamic Programming
- Negative Dynamic Programming