Dynamic Programming Deconstructed: Transformations of the Bellman Equation and Computational Efficiency
From MaRDI portal
Publication:5031647
DOI: 10.1287/opre.2020.2006  zbMath: 1485.90145  arXiv: 1811.01940  OpenAlex: W3126305116  MaRDI QID: Q5031647
Publication date: 16 February 2022
Published in: Operations Research
Full work available at URL: https://arxiv.org/abs/1811.01940
Related Items (1)
Uses Software
Cites Work
- Risk-averse dynamic programming for Markov decision processes
- Convex dynamic programming with (bounded) recursive utility
- Stochastic optimal growth model with risk sensitive preferences
- Unique solutions for stochastic recursive utilities
- Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming
- Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher
- Optimal Stopping in a Partially Observable Markov Process with Costly Information
- Action Elimination Procedures for Modified Policy Iteration Algorithms
- Approximate Dynamic Programming
- Robust Dynamic Programming
- Robustness