Optimal control as a graphical model inference problem
From MaRDI portal
Publication:420939
DOI: 10.1007/s10994-012-5278-7 · zbMATH Open: 1243.93133 · arXiv: 0901.0633 · OpenAlex: W2107662876 · MaRDI QID: Q420939
Authors: Hilbert J. Kappen, Vicenç Gómez, Manfred Opper
Publication date: 23 May 2012
Published in: Machine Learning
Abstract: We reformulate a class of non-linear stochastic optimal control problems introduced by Todorov (2007) as a Kullback-Leibler (KL) minimization problem. As a result, the optimal control computation reduces to an inference computation and approximate inference methods can be applied to efficiently compute approximate optimal controls. We show how this KL control theory contains the path integral control method as a special case. We provide an example of a block stacking task and a multi-agent cooperative game where we demonstrate how approximate inference can be successfully applied to instances that are too complex for exact computation. We discuss the relation of the KL control approach to other inference approaches to control.
Full work available at URL: https://arxiv.org/abs/0901.0633
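The abstract's reduction of optimal control to inference rests on Todorov's linearly solvable MDPs, where the desirability function z(x) = exp(-v(x)) satisfies a *linear* fixed-point equation, z = exp(-q) ∘ Pz, under the uncontrolled dynamics P, and the optimal controlled dynamics tilt P by z. A minimal sketch follows; the chain task, cost values, and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical example: a 6-state chain with an absorbing, cost-free goal.
n = 6                      # states 0..5, state 5 is the goal
q = np.full(n, 0.1)        # per-step state cost under the passive dynamics
q[-1] = 0.0                # goal is cost-free

# Uncontrolled ("passive") dynamics p(x'|x): lazy random walk, goal absorbs.
P = np.zeros((n, n))
for x in range(n - 1):
    P[x, max(x - 1, 0)] += 0.5
    P[x, x + 1] += 0.5
P[-1, -1] = 1.0

# Desirability z(x) = exp(-v(x)) solves the linear equation z = exp(-q) * P z,
# so computing the optimal control reduces to an inference-style power iteration.
z = np.ones(n)
for _ in range(500):
    z = np.exp(-q) * (P @ z)
    z /= z[-1]             # fix the gauge: z = 1 at the cost-free goal

# Optimal controlled transition probabilities: u*(x'|x) ∝ p(x'|x) z(x').
U = P * z[None, :]
U /= U.sum(axis=1, keepdims=True)

v = -np.log(z)             # optimal cost-to-go
print(np.round(v, 3))      # decreases monotonically toward the goal
print(np.round(U[2], 3))   # controlled transitions from state 2 favor the goal
```

The point of the sketch is the structural one made in the abstract: the Bellman recursion has become linear in z, so any approximate inference machinery (belief propagation, cluster variation, Monte Carlo) can be substituted for the exact power iteration when the state space is too large.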
Recommendations
- Graphical model inference in optimal control of stochastic multi-agent systems
- A Bayesian view on motor control and planning
- Adaptive importance sampling for control and inference
- An introduction to stochastic control theory, path integrals and reinforcement learning
- Stochastic optimal control of state constrained systems
Keywords: Kullback-Leibler divergence; graphical model; optimal control; belief propagation; approximate inference; cluster variation method; uncontrolled dynamics
Cites Work
- LibDAI: a free and open source C++ library for discrete approximate inference in graphical models
- Title not available
- Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms
- Title not available
- Title not available
- Efficient computation of optimal actions
- Study of the starting pressure gradient in branching network
- Using Expectation-Maximization for Reinforcement Learning
- Policy search for motor primitives in robotics
- Graphical model inference in optimal control of stochastic multi-agent systems
- Dynamic programming and influence diagrams
- Path integrals and symmetry breaking for optimal control theory
Cited In (38)
- Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space
- Design of biased random walks on a graph with application to collaborative recommendation
- Learning effective state-feedback controllers through efficient multilevel importance samplers
- Convergence of value functions for finite horizon Markov decision processes with constraints
- Data assimilation: the Schrödinger perspective
- Probabilistic control and majorisation of optimal control
- A reward-maximizing spiking neuron as a bounded rational decision maker
- A minimum free energy model of motor learning
- Adaptive importance sampling for control and inference
- Nonlinear discrete time optimal control based on fuzzy models
- Optimal speech motor control and token-to-token variability: a Bayesian modeling approach
- A KBRL inference metaheuristic with applications
- A cost/speed/reliability tradeoff to erasing
- EP for efficient stochastic control with obstacles
- Online control of simulated humanoids using particle belief propagation
- Nonparametric inference of stochastic differential equations based on the relative entropy rate
- Title not available
- Kullback–Leibler-Quadratic Optimal Control
- Sparse randomized shortest paths routing with Tsallis divergence regularization
- Action selection in growing state spaces: control of network structure growth
- Planning and navigation as active inference
- Optimal design of priors constrained by external predictors
- Generalised free energy and active inference
- The free energy principle made simpler but not too simple
- A Bayesian view on motor control and planning
- Variational Inference for Stochastic Differential Equations
- On a probabilistic approach to synthesize control policies from example datasets
- Approximate constrained stochastic optimal control via parameterized input inference
- Reward Maximization Through Discrete Active Inference
- Variational approach to rare event simulation using least-squares regression
- Diffusion Schrödinger bridges for Bayesian computation
- Efficient computation of optimal actions
- A multilevel approach for stochastic nonlinear optimal control
- Bayesian optimal control for a non-autonomous stochastic discrete time system
- An estimator for the relative entropy rate of path measures for stochastic differential equations
- Graphical model inference in optimal control of stochastic multi-agent systems
- Systems of Bounded Rational Agents with Information-Theoretic Constraints
- An introduction to stochastic control theory, path integrals and reinforcement learning
This page was built for publication: Optimal control as a graphical model inference problem