Simplified risk-aware decision making with belief-dependent rewards in partially observable domains
From MaRDI portal
Publication:2093380
DOI: 10.1016/j.artint.2022.103775
OpenAlex: W4293879842
Wikidata: Q114206160
Scholia: Q114206160
MaRDI QID: Q2093380
FDO: Q2093380
Authors: Andrey Zhitnikov, Vadim Indelman
Publication date: 8 November 2022
Published in: Artificial Intelligence
Full work available at URL: https://doi.org/10.1016/j.artint.2022.103775
Cites Work
- DESPOT: online POMDP planning with regularization
- Planning and acting in partially observable stochastic domains
- Reinforcement learning. An introduction
- Point-based value iteration for continuous POMDPs
- Anytime point-based approximations for large POMDPs