Modeling and Planning with Macro-Actions in Decentralized POMDPs
DOI: 10.1613/jair.1.11418 · zbMATH Open: 1489.68314 · DBLP: journals/jair/AmatoKKH19 · OpenAlex: W2924816077 · Wikidata: Q90962033 (Scholia: Q90962033) · MaRDI QID: Q5376629 (FDO: Q5376629)
Authors: Christopher Amato, George D. Konidaris, Leslie Pack Kaelbling, Jonathan P. How
Publication date: 17 May 2019
Published in: Journal of Artificial Intelligence Research
Full work available at URL: https://doi.org/10.1613/jair.1.11418
Recommendations
- Efficient planning under uncertainty with macro-actions
- A concise introduction to decentralized POMDPs
- Planning and acting in partially observable stochastic domains
- An investigation into mathematical programming for finite horizon decentralized POMDPs
- A decentralized partially observable decision model for recognizing the multiagent goal in simulation systems
- The role of macros in tractable planning
- A decentralized partially observable Markov decision model with action duration for goal recognition in real time strategy games
Classification:
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
- Artificial intelligence for robotics (68T40)
- Agent technology and artificial intelligence (68T42)
- Markov and semi-Markov decision processes (90C40)
Cited In (14)
- Decentralized Markov decision processes for handling temporal and resource constraints in a multiple robot system
- Multiagent expedition with graphical models
- An investigation into mathematical programming for finite horizon decentralized POMDPs
- Optimal and approximate Q-value functions for decentralized POMDPs
- A concise introduction to decentralized POMDPs
- A sufficient statistic for influence in structured multiagent environments
- What to communicate? Execution-time decision in multi-agent POMDPs
- Probabilistic inference techniques for scalable multiagent decision making
- Optimally solving Dec-POMDPs as continuous-state MDPs
- A decentralized partially observable decision model for recognizing the multiagent goal in simulation systems
- A decentralized partially observable Markov decision model with action duration for goal recognition in real time strategy games
- Decentralized MDPs with sparse interactions
- Efficient planning under uncertainty with macro-actions
- Incremental clustering and expansion for faster optimal planning in decentralized POMDPs