Future memories are not needed for large classes of POMDPs
Publication: 6106534
DOI: 10.1016/j.orl.2023.02.011
zbMath: 1525.90424
arXiv: 2205.02580
MaRDI QID: Q6106534
Publication date: 3 July 2023
Published in: Operations Research Letters
Full work available at URL: https://arxiv.org/abs/2205.02580
Cites Work
- Finding optimal memoryless policies of POMDPs under the expected average reward criterion
- Dynamic programming and suboptimal control: a survey from ADP to MPC
- Decomposable Markov Decision Processes: A Fluid Optimization Approach
- Julia: A Fresh Approach to Numerical Computing
- The Complexity of Markov Decision Processes
- State of the Art—A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms
- Computability of global solutions to factorable nonconvex programs: Part I — Convex underestimating problems
- The Optimal Control of Partially Observable Markov Processes over the Infinite Horizon: Discounted Costs
- The Optimal Control of Partially Observable Markov Processes over a Finite Horizon
- JuMP: A Modeling Language for Mathematical Optimization