A decentralized partially observable Markov decision model with action duration for goal recognition in real time strategy games (Q2403912)

From MaRDI portal
scientific article

    Statements

    A decentralized partially observable Markov decision model with action duration for goal recognition in real time strategy games (English)
    12 September 2017
    Summary: Multiagent goal recognition is a difficult yet important problem in many real-time strategy games and simulation systems. Traditional modeling methods either demand detailed domain knowledge of the agents and a training dataset for policy estimation, or lack a clear definition of action duration. To address these problems, we propose a novel Dec-POMDM-T model that combines the classic Dec-POMDP with an observation model for the recognizer, a joint goal with its termination indicator, and duration variables for actions together with action termination variables. In this paper, a model-free algorithm named cooperative colearning, based on Sarsa, is used. Because Dec-POMDM-T typically faces multiagent goal recognition problems with various kinds of noise, partially missing data, and unknown action durations, the paper exploits sequential importance sampling (SIS) particle filtering with resampling for inference under the dynamic Bayesian network structure of Dec-POMDM-T. In the experiments, a modified predator-prey scenario is adopted to study the multiagent joint goal recognition problem, i.e., recognizing the joint target shared among cooperative predators. Experimental results show that (a) Dec-POMDM-T works effectively in multiagent goal recognition and adapts well to dynamically changing goals within the agent group; (b) Dec-POMDM-T outperforms traditional Dec-MDP-based methods in terms of precision, recall, and \(F\)-measure.
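    The inference step described in the summary, SIS particle filtering with resampling over goal hypotheses, can be illustrated with a minimal generic sketch. The scenario below (a single agent on a line heading toward one of two candidate targets, a deterministic motion model, Gaussian observation noise, and all constants) is an illustrative assumption, not the paper's actual Dec-POMDM-T dynamics; it only demonstrates how resampled particles concentrate on the goal hypothesis consistent with the observations.

    ```python
    import math
    import random

    GOALS = [0.0, 10.0]   # candidate target locations (hypothetical)
    STEP = 0.5            # distance moved toward the goal per tick (hypothetical)
    OBS_NOISE = 1.0       # std. dev. of Gaussian observation noise (hypothetical)

    def advance(pos, goal):
        """Deterministic motion model: move STEP toward the goal."""
        if abs(goal - pos) <= STEP:
            return goal
        return pos + STEP * (1.0 if goal > pos else -1.0)

    def likelihood(obs, pos):
        """Unnormalized Gaussian observation likelihood p(obs | pos)."""
        return math.exp(-0.5 * ((obs - pos) / OBS_NOISE) ** 2)

    def sis_pf(observations, start=5.0, n_particles=500, seed=0):
        """SIS particle filter with multinomial resampling over goal hypotheses."""
        rng = random.Random(seed)
        # Each particle carries a goal hypothesis and a state estimate.
        particles = [(rng.choice(GOALS), start) for _ in range(n_particles)]
        for obs in observations:
            # Propagate each particle under its own goal hypothesis,
            # then weight by the observation likelihood.
            particles = [(g, advance(p, g)) for g, p in particles]
            weights = [likelihood(obs, p) for _, p in particles]
            total = sum(weights) or 1.0
            # Resampling: redraw particles in proportion to weight, which
            # concentrates the set on hypotheses consistent with the data.
            particles = rng.choices(
                particles, weights=[w / total for w in weights], k=n_particles
            )
        # Posterior over goals = fraction of particles per hypothesis.
        return {g: sum(1 for gp, _ in particles if gp == g) / n_particles
                for g in GOALS}

    # The agent actually heads for 10.0; observations drift upward accordingly.
    obs_seq = [5.4, 6.1, 6.4, 7.2, 7.3, 8.1]
    posterior = sis_pf(obs_seq)
    ```

    After a few observations the predicted positions under the two hypotheses diverge sharply, so nearly all resampled particles carry the correct goal; the same mechanism generalizes to joint goals over multiple agents as in the paper's predator-prey setting.
    
    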