Adaptive importance sampling for control and inference
Abstract: Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac path integral and estimated using Monte Carlo sampling. In this contribution we review path integral control theory in the finite-horizon case. We subsequently focus on the problem of how to compute and represent control solutions. Within PI theory, the question of how to compute a control solution becomes a question of importance sampling. Efficient importance samplers are state-feedback controllers, and using them requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross-entropy method. We derive a gradient descent method that allows learning feedback controllers with an arbitrary parametrisation. We refer to this method as the Path Integral Cross Entropy method, or PICE. We illustrate this method on some simple examples. Path integral control methods can also be used to estimate the posterior distribution in latent state models. In neuroscience such problems arise when estimating connectivity from neural recording data using expectation maximisation (EM). We demonstrate the path integral control method as an accurate alternative to particle filtering.
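The PICE scheme sketched in the abstract — weight sampled trajectories by exp(-S/λ) and take a weighted gradient step on a parametrised feedback controller — can be illustrated in a few lines. This is a hedged, minimal sketch, not the authors' implementation: the 1-D dynamics, quadratic costs, learning rate, number of rollouts, and the linear parametrisation u_θ(x) = θ₀ + θ₁x are all assumptions made for the example.

```python
import numpy as np

# Minimal PICE-style sketch (all model choices here are illustrative assumptions):
# 1-D controlled diffusion dx = u dt + sigma dW with a quadratic terminal cost,
# and a linear state-feedback controller u_theta(x) = theta[0] + theta[1] * x.
rng = np.random.default_rng(0)
dt, horizon, sigma, lam = 0.02, 1.0, 1.0, 1.0
steps = int(horizon / dt)
theta = np.zeros(2)

def rollout(theta, n):
    """Simulate n trajectories under the current feedback controller."""
    x = np.zeros(n)
    cost = np.zeros(n)           # accumulated path cost S per trajectory
    grad = np.zeros((n, 2))      # per-trajectory gradient estimate
    for _ in range(steps):
        u = theta[0] + theta[1] * x
        dW = rng.normal(0.0, np.sqrt(dt), n)
        # the sampling noise enters the cross-entropy gradient via dW * du/dtheta
        grad += dW[:, None] * np.stack([np.ones(n), x], axis=1)
        cost += 0.5 * u**2 * dt             # quadratic control cost
        x = x + u * dt + sigma * dW
    cost += 0.5 * (x - 1.0)**2              # terminal cost: steer toward x* = 1
    return cost, grad

for _ in range(200):
    S, grad = rollout(theta, n=500)
    w = np.exp(-(S - S.min()) / lam)        # path-integral importance weights
    w /= w.sum()                            # self-normalise over rollouts
    theta += 0.5 * (w[:, None] * grad).sum(axis=0)  # PICE-style gradient step
```

Low-cost rollouts (those ending near the target) receive high weight, so the update pushes the controller toward the noise realisations that produced them; at convergence the weighted gradient vanishes and the learned feedback drives the state toward x* = 1.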
Recommendations
- Learning effective state-feedback controllers through efficient multilevel importance samplers
- Optimal control as a graphical model inference problem
- A generalized path integral control approach to reinforcement learning
- Stochastic optimal control of state constrained systems
- EP for efficient stochastic control with obstacles
Cites work
- scientific article; zbMATH DE number 5919872
- scientific article; zbMATH DE number 54145
- scientific article; zbMATH DE number 1222290
- scientific article; zbMATH DE number 1321699
- scientific article; zbMATH DE number 711262
- A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data
- A generalized path integral control approach to reinforcement learning
- A sequential smoothing algorithm with linear computational cost
- A solution of the smoothing problem for linear dynamic systems
- A tutorial on the cross-entropy method
- An analysis of temporal-difference learning with function approximation
- Applications of the cross-entropy method to importance sampling and optimal control of diffusions
- Backward simulation methods for Monte Carlo statistical inference
- Differential dynamic programming and Newton's method for discrete optimal control problems
- Efficient computation of optimal actions
- Functional Approximations and Dynamic Programming
- Importance Sampling, Large Deviations, and Differential Games
- Learning Tetris Using the Noisy Cross-Entropy Method
- On Transforming a Certain Class of Stochastic Processes by Absolutely Continuous Substitution of Measures
- Optimal Control and Nonlinear Filtering for Nondegenerate Diffusion Processes
- Optimal control as a graphical model inference problem
- Reinforcement learning. An introduction
- Smoothing algorithms for state-space models
- Universal approximation bounds for superpositions of a sigmoidal function
- Variance Reduction for Simulated Diffusions
Cited in (31)
- Probabilistic control and majorisation of optimal control
- Adaptive sampling of large deviations
- An optimal control derivation of nonlinear smoothing equations
- Optimal control of probabilistic Boolean control networks: A scalable infinite horizon approach
- Optimal control as a graphical model inference problem
- Learning-based importance sampling via stochastic optimal control for stochastic reaction networks
- Nonparametric inference of stochastic differential equations based on the relative entropy rate
- scientific article; zbMATH DE number 7307475
- A Bayesian view on motor control and planning
- Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space
- Convergence rates for optimised adaptive importance samplers
- Implicitly adaptive importance sampling
- A generalized path integral control approach to reinforcement learning
- An iterative Bayesian filtering framework for fast and automated calibration of DEM models
- Action selection in growing state spaces: control of network structure growth
- A multilevel approach for stochastic nonlinear optimal control
- Controlled interacting particle algorithms for simulation-based reinforcement learning
- Variational Inference for Stochastic Differential Equations
- Variational approach to rare event simulation using least-squares regression
- Data assimilation: the Schrödinger perspective
- Adaptive path-integral autoencoder: representation learning and planning for dynamical systems
- An estimator for the relative entropy rate of path measures for stochastic differential equations
- Diffusion Schrödinger bridges for Bayesian computation
- Computable Primal and Dual Bounds for Stochastic Control
- Learning effective state-feedback controllers through efficient multilevel importance samplers
- EP for efficient stochastic control with obstacles
- Iterative path integral approach to nonlinear stochastic optimal control under compound Poisson noise
- Design of biased random walks on a graph with application to collaborative recommendation
- scientific article; zbMATH DE number 5010659
- Controlled sequential Monte Carlo
- Daisee: Adaptive importance sampling by balancing exploration and exploitation
MaRDI item Q290478