POMDP controllers with optimal budget
DOI: 10.1007/978-3-031-16336-4_6 · zbMATH Open: 1522.68346 · MaRDI QID: Q6160772 · FDO: Q6160772
Authors: Jip Spel, Svenja Stein, Joost-Pieter Katoen
Publication date: 2 June 2023
Published in: Quantitative Evaluation of Systems
Recommendations
- Optimal cost almost-sure reachability in POMDPs
- On Near Optimality of the Set of Finite-State Controllers for Average Cost POMDP
- On the Computational Complexity of Stochastic Controller Optimization in POMDPs
- Control Theory Meets POMDPs: A Hybrid Systems Approach
- Efficient Algorithms for Budget-Constrained Markov Decision Processes
- Policy iteration for bounded-parameter POMDPs
- Efficient, optimal stochastic-action selection when limited by an action budget
- Partially observable stochastic optimal control
- Optimal control with budget constraints and resets
MSC classification
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
- Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) (68Q87)
- Specification and verification (program logics, model checking, etc.) (68Q60)
- Markov and semi-Markov decision processes (90C40)
Cites Work
- Accelerated model checking of parametric Markov chains
- Are parametric Markov chains monotonic?
- Computationally Feasible Bounds for Partially Observed Markov Decision Processes
- Finding provably optimal Markov chains
- Parameter synthesis for Markov models: faster than ever
- Parametric Markov chains: PCTL complexity and fraction-free Gaussian elimination
- Parametric probabilistic transition systems for system design and analysis
- Perturbation analysis in verification of discrete-time Markov chains
- Planning and acting in partially observable stochastic domains
- Reinforcement learning with limited reinforcement: using Bayes risk for active learning in POMDPs
- The complexity of reachability in parametric Markov decision processes
- Theoretical Aspects of Computing - ICTAC 2004
Cited In (3)