Action-dependent stopping times and Markov decision process with unbounded rewards
DOI: 10.1007/BF01783952 · zbMath: 0471.90094 · OpenAlex: W2058997544 · MaRDI QID: Q1158111
J. A. E. E. Van Nunen, Shaler Stidham jun.
Publication date: 1981
Published in: OR Spektrum
Full work available at URL: https://doi.org/10.1007/bf01783952
Keywords: algorithm; upper bounds; lower bounds; unbounded rewards; action-dependent stopping time; elimination of non-optimal actions; equal-row-sum property; semi-Markov decision processes; successive-approximation method
MSC classification:
- Stopping times; optimal stopping problems; gambling theory (60G40)
- Markov renewal processes, semi-Markov processes (60K15)
- Markov and semi-Markov decision processes (90C40)
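
The keywords describe the standard machinery of successive approximation with upper and lower bounds and elimination of non-optimal actions. As a point of reference, the sketch below illustrates that machinery for the classical discounted, finite-state MDP with MacQueen-style extrapolated bounds. It is a generic illustration of the keyword concepts only, not the paper's method, which treats unbounded rewards via weighted supremum norms and action-dependent stopping times in a semi-Markov setting; the function name, array layout, and NumPy formulation are assumptions of this sketch.

```python
import numpy as np


def successive_approximation(P, r, beta, tol=1e-8, max_iter=10_000):
    """Value iteration with MacQueen-style upper/lower bounds and
    elimination of non-optimal actions for a discounted finite MDP.
    (Illustrative sketch; not the algorithm of the cited paper.)

    P    : (A, S, S) array, P[a, s, t] = probability of moving s -> t under action a
    r    : (A, S) array,    r[a, s]    = expected one-step reward of action a in state s
    beta : discount factor, 0 < beta < 1
    """
    A, S, _ = P.shape
    v = np.zeros(S)                       # current value-function estimate
    active = np.ones((A, S), dtype=bool)  # actions not yet proven non-optimal

    for _ in range(max_iter):
        q = r + beta * (P @ v)            # Q-values under v, shape (A, S)
        v_new = np.where(active, q, -np.inf).max(axis=0)

        # Extrapolated bounds on the optimal value: lo <= v* <= hi componentwise.
        diff = v_new - v
        lo = v_new + beta / (1.0 - beta) * diff.min()
        hi = v_new + beta / (1.0 - beta) * diff.max()

        # Eliminate action a in state s if even its optimistic Q-value
        # (continuation valued at the upper bound hi) stays below lo[s].
        q_optimistic = r + beta * (P @ hi)
        active &= q_optimistic >= lo

        if (hi - lo).max() < tol:
            break
        v = v_new

    # Midpoint of the bounds, plus the surviving (candidate-optimal) actions.
    return (lo + hi) / 2.0, active


if __name__ == "__main__":
    # Tiny 2-state, 2-action example with made-up numbers.
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.6, 0.4]]])
    r = np.array([[1.0, 2.0],
                  [0.5, 3.0]])
    v_est, active = successive_approximation(P, r, beta=0.9)
    print(v_est, active)
```

The elimination test is conservative: an action is dropped at a state only when its most optimistic Q-value, computed against the upper bound, already falls below the lower bound on the optimal value, so only provably non-optimal actions are removed and the surviving action sets still contain an optimal policy.
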
Related Items
- On theory and algorithms for Markov decision problems with the total reward criterion
- Solving linear systems by methods based on a probabilistic interpretation
Cites Work
- On theory and algorithms for Markov decision problems with the total reward criterion
- Markov programming by successive approximations with respect to weighted supremum norms
- Markov decision processes and strongly excessive functions
- Iterative solution of the functional equations of undiscounted Markov renewal programming
- Technical Note—An Equivalence Between Continuous and Discrete Time Markov Decision Processes
- Successive approximations for Markov decision processes and Markov games with unbounded rewards
- On Dynamic Programming with Unbounded Rewards
- Applying a New Device in the Optimization of Exponential Queuing Systems
- Bounds and Transformations for Discounted Finite Markov Decision Chains
- Note—A Test for Nonoptimal Actions in Undiscounted Finite Markov Decision Chains
- Discounting, Ergodicity and Convergence for Markov Decision Processes
- A set of successive approximation methods for discounted Markovian decision problems
- Note—A Note on Dynamic Programming with Unbounded Rewards