Pure Stationary Optimal Strategies in Markov Decision Processes
From MaRDI portal
Publication: 3590933
DOI: 10.1007/978-3-540-70918-3_18
zbMATH Open: 1186.93043
OpenAlex: W1562671620
MaRDI QID: Q3590933
FDO: Q3590933
Authors: Hugo Gimbert
Publication date: 3 September 2007
Published in: STACS 2007
Full work available at URL: https://doi.org/10.1007/978-3-540-70918-3_18
Recommendations
- Optimal stationary strategies in leavable Markov decision processes
- Optimal stationary policies in the vector-valued Markov decision process
- scientific article; zbMATH DE number 3847226
- Optimal Markov strategies
- Optimal Stationary Policies in General State Space Markov Decision Chains with Finite Action Sets
- Stationary \(\varepsilon\)-optimal strategies in stochastic games
- Optimal continuous time Markov decisions
- scientific article; zbMATH DE number 3936962
- On Stationary Strategies in Countable State Total Reward Markov Decision Processes
Mathematics Subject Classification:
- Markov and semi-Markov decision processes (90C40)
- Stochastic games, stochastic differential games (91A15)
- Discrete event control/observation systems (93C65)
Cited In (13)
- Continuous Positional Payoffs
- Strong 1-optimal stationary policies in denumerable Markov decision processes
- Synthesizing efficient systems in probabilistic environments
- Submixing and shift-invariant stochastic games
- Title not available
- Title not available
- Title not available
- Optimal Markov strategies
- Optimal Stationary Policies in General State Space Markov Decision Chains with Finite Action Sets
- Simplifying optimal strategies in \(\limsup\) and \(\liminf\) stochastic games
- Title not available
- Title not available
- Learning-Based Mean-Payoff Optimization in an Unknown MDP under Omega-Regular Constraints