Bayesian inverse reinforcement learning for collective animal movement

From MaRDI portal
Publication: Q2154196

DOI: 10.1214/21-AOAS1529
zbMATH Open: 1498.62292
arXiv: 2009.04003
OpenAlex: W3083921766
MaRDI QID: Q2154196
FDO: Q2154196

Toryn L. J. Schafer, Christopher K. Wikle, Mevin B. Hooten

Publication date: 14 July 2022

Published in: The Annals of Applied Statistics

Abstract: Agent-based methods allow for defining simple rules that generate complex group behaviors. The governing rules of such models are typically set a priori, and parameters are tuned from observed behavior trajectories. Instead of making simplifying assumptions across all anticipated scenarios, inverse reinforcement learning provides inference on the short-term (local) rules governing long-term behavior policies by using properties of a Markov decision process. We use the computationally efficient linearly-solvable Markov decision process to learn the local rules governing collective movement for a simulation of the self-propelled particle (SPP) model and a data application for a captive guppy population. The estimation of the behavioral decision costs is done in a Bayesian framework with basis-function smoothing. We recover the true costs in the SPP simulation and find that the guppies value collective movement more than targeted movement toward shelter.


Full work available at URL: https://arxiv.org/abs/2009.04003
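
The abstract's two computational ingredients, a linearly-solvable Markov decision process (LMDP) and Bayesian estimation of state costs with basis-function smoothing, can be illustrated on a toy problem. The sketch below is only a minimal Python illustration of the general technique, not the paper's implementation: the one-dimensional chain of states, the Gaussian-bump basis, the average-cost LMDP formulation, and the random-walk Metropolis sampler are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state space (an assumption for this sketch): a 1-D chain of n states
# with lazy random-walk passive dynamics P(x'|x).
n = 20
P = np.zeros((n, n))
for s in range(n):
    for s2 in (s - 1, s, s + 1):
        if 0 <= s2 < n:
            P[s, s2] = 1.0
    P[s] /= P[s].sum()

def desirability(q, iters=300):
    # Average-cost LMDP: z is the principal eigenvector of diag(exp(-q)) @ P,
    # found by power iteration; the optimal policy uses z only up to scale.
    M = np.exp(-q)[:, None] * P
    z = np.ones(n)
    for _ in range(iters):
        z = M @ z
        z /= z.sum()
    return z

def policy(q):
    # Closed-form optimal controlled dynamics: u*(x'|x) ~ P(x,x') z(x').
    U = P * desirability(q)[None, :]
    return U / U.sum(axis=1, keepdims=True)

# Gaussian-bump basis for smoothing the state costs (a stand-in for the
# paper's basis-function representation).
centers = np.linspace(0, n - 1, 6)
B = np.exp(-0.5 * ((np.arange(n)[:, None] - centers[None, :]) / 2.0) ** 2)

# Simulate observed transitions from the optimal policy under a "true" cost.
beta_true = np.array([2.0, 0.5, 0.0, 0.0, 0.5, 2.0])
U_true = policy(B @ beta_true)
s, trans = n // 2, []
for _ in range(5000):
    s2 = rng.choice(n, p=U_true[s])
    trans.append((s, s2))
    s = s2
trans = np.array(trans)

def log_post(beta):
    # Transition log-likelihood under u*(.; B @ beta) plus a N(0, 1) prior.
    # LMDP costs are identified only up to an additive constant; the prior
    # anchors that level.
    U = policy(B @ beta)
    return np.log(U[trans[:, 0], trans[:, 1]]).sum() - 0.5 * beta @ beta

# Random-walk Metropolis over the basis coefficients.
beta = np.zeros(len(centers))
lp = log_post(beta)
draws = []
for _ in range(2000):
    prop = beta + 0.05 * rng.standard_normal(beta.size)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    draws.append(beta.copy())

beta_hat = np.mean(draws[1000:], axis=0)
print("posterior-mean state costs:", np.round(B @ beta_hat, 2))
```

Because the optimal LMDP policy has the closed form u*(x'|x) proportional to p(x'|x) z(x'), each posterior evaluation requires only one eigenvector solve; this is what makes the linearly-solvable class computationally attractive for inverse reinforcement learning.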



