Convergence rates for optimised adaptive importance samplers
Publication: 2029096
DOI: 10.1007/S11222-020-09983-1
zbMath: 1461.62010
arXiv: 1903.12044
OpenAlex: W3125726451
MaRDI QID: Q2029096
Ömer Deniz Akyildiz, Joaquín Míguez
Publication date: 3 June 2021
Published in: Statistics and Computing
Full work available at URL: https://arxiv.org/abs/1903.12044
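The paper analyses convergence rates for adaptive importance samplers whose proposal is tuned by optimisation. As a generic illustration of the adaptive importance sampling setting (not the authors' specific optimised scheme), the following sketch repeatedly draws from a Gaussian proposal, computes importance weights against an unnormalised target, and adapts the proposal by weighted moment matching; all names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalised log-density of the target, here N(3, 1) (illustrative choice).
def log_target(x):
    return -0.5 * (x - 3.0) ** 2

def adaptive_is(n_iters=50, n_samples=500):
    """Generic adaptive importance sampling via weighted moment matching."""
    mu, sigma = 0.0, 2.0  # initial Gaussian proposal N(mu, sigma^2)
    for _ in range(n_iters):
        x = rng.normal(mu, sigma, n_samples)
        # Log-density of the proposal at the sampled points.
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
        # Self-normalised importance weights (stabilised in log space).
        log_w = log_target(x) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Adaptation step: refit the proposal to the weighted samples.
        mu = float(np.sum(w * x))
        sigma = max(float(np.sqrt(np.sum(w * (x - mu) ** 2))), 1e-3)
    return mu, sigma

mu, sigma = adaptive_is()
print(mu, sigma)  # proposal parameters approach the target's (3.0, 1.0)
```

The paper's optimised samplers replace the moment-matching update with a stochastic optimisation step on a variance-related objective, which is what enables explicit convergence rates.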
Related Items (8)
- Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space
- Daisee: Adaptive importance sampling by balancing exploration and exploitation
- Gradient-based adaptive importance samplers
- Efficient Bayes inference in neural networks through adaptive importance sampling
- Context-Aware Surrogate Modeling for Balancing Approximation and Sampling Costs in Multifidelity Importance Sampling and Bayesian Inverse Problems
- Variance analysis of multiple importance sampling schemes
- Implicitly adaptive importance sampling
- MCMC-driven importance samplers
Cites Work
- Adaptive importance sampling for control and inference
- Particle-kernel estimation of the filter density in state-space models
- Minimizing finite sums with the stochastic average gradient
- Adaptive Monte Carlo variance reduction for Lévy processes with two-time-scale stochastic approximation
- Convergence of adaptive mixtures of importance sampling schemes
- Introductory lectures on convex optimization. A basic course.
- The sample size required in importance sampling
- Asymptotic bias of stochastic gradient search
- Importance sampling: intrinsic dimension and computational cost
- A framework for adaptive Monte Carlo procedures
- Importance Sampling and Necessary Sample Size: An Information Theory Approach
- Graphical Models, Exponential Families, and Variational Inference
- Optimizing Adaptive Importance Sampling by Stochastic Approximation
- Optimization Methods for Large-Scale Machine Learning
- Adaptative Monte Carlo Method, A Variance Reduction Technique
- Acceleration on Adaptive Importance Sampling with Sample Average Approximation
- A Stochastic Approximation Method