Uniform Turnpike Theorems for Finite Markov Decision Processes
Publication: 5108234 (MaRDI QID: Q5108234)
DOI: 10.1287/moor.2017.0912
zbMath: 1437.90158
OpenAlex: W2890373760
Wikidata: Q129235508 (Scholia: Q129235508)
Publication date: 30 April 2020
Published in: Mathematics of Operations Research
Full work available at URL: https://semanticscholar.org/paper/6807f3ca1607f79cd4f9085e931b5f8f6d293dbe
Classification (MSC): Discrete-time Markov processes on general state spaces (60J05); Dynamic programming (90C39); Markov and semi-Markov decision processes (90C40)
Related Items (5)
- Turnpike in infinite dimension
- Complexity bounds for approximately solving discounted MDPs by value iterations
- Turnpikes in Finite Markov Decision Processes and Random Walk
- Controlled Random Walk: Conjecture and Counter-Example
- Time-varying Markov decision processes with state-action-dependent discount factors and unbounded costs
Cites Work
- Sensitivity analysis in discounted Markovian decision problems
- The value iteration algorithm is not strongly polynomial for discounted dynamic programming
- Solving H-horizon, stationary Markov decision problems in time proportional to log(H)
- The Simplex and Policy-Iteration Methods Are Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate
- An overview of turnpike theory: towards the discounted deterministic case
- Stability of the Turnpike Phenomenon in Discrete-Time Optimal Control Problems
- Optimum Policy Regions for Markov Processes with Discounting
- Turnpike Planning Horizons for a Markovian Decision Model