Active inference and agency: optimal control without cost functions
DOI: 10.1007/s00422-012-0512-8
zbMath: 1267.90167
OpenAlex: W2091405003
Wikidata: Q39574261 (Scholia: Q39574261)
MaRDI QID: Q353847
Spyridon Samothrakis, Read Montague, Karl J. Friston
Publication date: 16 July 2013
Published in: Biological Cybernetics
Full work available at URL: https://doi.org/10.1007/s00422-012-0512-8
Keywords: optimal control; free energy; inference; action; Bayesian; agency; partially observable Markov decision processes
Related Items (14)
- Active inference on discrete state-spaces: a synthesis
- Optimal speech motor control and token-to-token variability: a Bayesian modeling approach
- Robot navigation as hierarchical active inference
- Reward Maximization Through Discrete Active Inference
- Deep active inference as variational policy gradients
- Bayesian optimal control for a non-autonomous stochastic discrete time system
- The Discrete and Continuous Brain: From Decisions to Movement—And Back Again
- Sustained sensorimotor control as intermittent decisions about prediction errors: computational framework and application to ground vehicle steering
- Generalised free energy and active inference
- A tutorial on variational Bayes for latent linear stochastic time-series models
- A Minimum Free Energy Model of Motor Learning
- Modeling the subjective perspective of consciousness and its role in the control of behaviours
- Active Inference: Demystified and Compared
- Predictive Processing in Cognitive Robotics: A Review
Cites Work
- Planning and acting in partially observable stochastic domains
- Action understanding and active inference
- Free energy, value, and attractors
- Dual-control theory. I
- Variational Bayesian identification and prediction of stochastic nonlinear dynamic causal models
- A complete class theorem for statistical problems with finite sample spaces
- Simple statistical gradient-following algorithms for connectionist reinforcement learning
- \({\mathcal Q}\)-learning
- Adaptive dual control. Theory and applications.
- Using Expectation-Maximization for Reinforcement Learning
- Probabilistic Inference and Influence Diagrams
- Proof of the Ergodic Theorem
- On the Theory of Dynamic Programming
- Stochastic Boolean satisfiability