Finding the \(K\) best policies in a finite-horizon Markov decision process
Publication: 2433472
DOI: 10.1016/j.ejor.2005.06.011
zbMath: 1142.90495
OpenAlex: W2167322073
MaRDI QID: Q2433472
Anders Ringgaard Kristensen, Lars Relund Nielsen
Publication date: 27 October 2006
Published in: European Journal of Operational Research
Full work available at URL: https://doi.org/10.1016/j.ejor.2005.06.011
Keywords: stochastic dynamic programming; directed hypergraphs; hyperpaths; \(K\) best policies; finite-horizon Markov decision processes
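The record's topic, ranking the \(K\) best policies of a finite-horizon Markov decision process, can be illustrated with a brute-force sketch: enumerate all time-dependent deterministic Markov policies of a toy MDP, evaluate each policy's expected total reward by backward recursion, and keep the top \(K\). The MDP data below is hypothetical and the enumeration is exponential; the paper itself derives an efficient method via \(K\) shortest hyperpaths in a directed hypergraph, which this sketch does not implement.

```python
import itertools

# Toy finite-horizon MDP (hypothetical data, for illustration only).
states = [0, 1]
actions = [0, 1]
T = 2  # number of decision stages

# trans[s][a] = list of (probability, next_state, reward)
trans = {
    0: {0: [(1.0, 0, 1.0)],
        1: [(0.5, 0, 0.0), (0.5, 1, 3.0)]},
    1: {0: [(1.0, 1, 2.0)],
        1: [(1.0, 0, 0.5)]},
}

def value(policy, t, s):
    """Expected total reward from stage t, state s, under a
    time-dependent deterministic policy {(t, s): action}."""
    if t == T:
        return 0.0
    a = policy[(t, s)]
    return sum(p * (r + value(policy, t + 1, s2))
               for p, s2, r in trans[s][a])

def k_best_policies(k, start=0):
    """Rank every deterministic Markov policy by expected total
    reward from `start` and return the k best (brute force)."""
    keys = [(t, s) for t in range(T) for s in states]
    ranked = [(value(dict(zip(keys, choice)), 0, start),
               dict(zip(keys, choice)))
              for choice in itertools.product(actions, repeat=len(keys))]
    ranked.sort(key=lambda x: -x[0])
    return ranked[:k]

best = k_best_policies(3)
for v, pol in best:
    print(round(v, 3), pol)
```

Note that policies differing only in actions at unreachable state-stage pairs tie in value, so the ranking contains duplicates in reward terms; the hypergraph formulation in the paper avoids this blow-up by ranking hyperpaths rather than raw action tables.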
Related Items
- A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs
- Optimizing pig marketing decisions under price fluctuations
- An extended ϵ-constraint method for a multiobjective finite-horizon Markov decision process
- Ranking policies in discrete Markov decision processes
- Finite horizon semi-Markov decision processes with application to maintenance systems
- Embedding a state space model into a Markov decision process
- A matrix approach to hypergraph stable set and coloring problems with its application to storing problem
- Finding hypernetworks in directed hypergraphs
Cites Work
- Hierarchic Markov processes and their applications in replacement models
- Adaptive control of constrained Markov chains: Criteria and policies
- A directed hypergraph model for random time dependent shortest paths
- Multi-level hierarchic Markov processes as a framework for herd management support
- Finding the \(K\) shortest hyperpaths
- Gainfree Leontief substitution flow problems
- Directed hypergraphs and applications
- Constrained Undiscounted Stochastic Dynamic Programming
- Minimal Representation of Directed Hypergraphs
- Survey of linear programming for standard and nonstandard Markovian control problems. Part I: Theory
- Bicriterion shortest hyperpaths in random time-dependent networks
- An Incremental Algorithm for a Generalization of the Shortest-Path Problem
- Some Remarks on Finite Horizon Markovian Decision Models
- Constrained Markov Decision Chains