Bayesian inverse reinforcement learning for collective animal movement
From MaRDI portal
Publication:2154196
Abstract: Agent-based methods allow for defining simple rules that generate complex group behaviors. The governing rules of such models are typically set a priori, and parameters are tuned from observed behavior trajectories. Instead of making simplifying assumptions across all anticipated scenarios, inverse reinforcement learning provides inference on the short-term (local) rules governing long-term behavior policies by using properties of a Markov decision process. We use the computationally efficient linearly-solvable Markov decision process to learn the local rules governing collective movement for a simulation of the self-propelled particle (SPP) model and for a data application to a captive guppy population. The estimation of the behavioral decision costs is done in a Bayesian framework with basis function smoothing. We recover the true costs in the SPP simulation and find that the guppies value collective movement more than targeted movement toward shelter.
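The abstract leans on the linearly-solvable MDP (LMDP), listed below among the cited works as "Efficient computation of optimal actions." A minimal sketch of that machinery, with toy state costs and passive dynamics standing in for the quantities the paper would estimate from trajectories:

```python
import numpy as np

# Hedged illustration of the linearly-solvable MDP (LMDP): the state costs q
# and passive dynamics P below are toy values, not quantities from the paper.
rng = np.random.default_rng(0)
n = 5                                   # number of discrete states
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)       # passive dynamics: row-stochastic
q = rng.random(n)                       # per-state costs (what IRL estimates)

# The desirability function z solves the linear eigenproblem
#   lambda * z = diag(exp(-q)) @ P @ z,
# so the optimal policy follows from the principal eigenvector of G.
G = np.diag(np.exp(-q)) @ P
z = np.ones(n)
for _ in range(500):                    # power iteration
    z = G @ z
    z /= np.linalg.norm(z)

# Optimally controlled transitions: u*(x'|x) proportional to p(x'|x) * z(x').
U = P * z[None, :]
U /= U.sum(axis=1, keepdims=True)
print(U.sum(axis=1))                    # each row is a valid distribution
```

The linearity is the computational appeal the abstract cites: the Bellman equation reduces to an eigenvector problem, so the forward problem inside each Bayesian IRL update is cheap.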
Recommendations
- Hierarchical nonlinear spatio-temporal agent-based models for collective animal movement
- Nonparametric inference of interaction laws in systems of agents from trajectory data
- Inverse Bayesian inference in swarming behaviour of soldier crabs
- Modular inverse reinforcement learning for visuomotor behavior
- Inverse reinforcement learning from summary data
Cites work
- scientific article (zbMATH DE number 3126094; no title available)
- A survey of inverse reinforcement learning: challenges, methods and progress
- Agent-based inference for animal movement and selection
- Continuous-time discrete-space models for animal movement
- Dynamic models of animal movement with spatial point process interactions
- Dynamic social networks based on movement
- Efficient computation of optimal actions
- Hierarchical nonlinear spatio-temporal agent-based models for collective animal movement
- Inverse reinforcement learning from summary data
- Statistical Implementations of Agent‐Based Demographic Models
- The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
Cited in (4)
- Inverse Bayesian inference in swarming behaviour of soldier crabs
- Hierarchical nonlinear spatio-temporal agent-based models for collective animal movement
- Behavioral spherical harmonics for long-range agents' interaction
- Analysis and classification of collective behavior using generative modeling and nonlinear manifold learning