Convergence of adaptive mixtures of importance sampling schemes
Abstract: In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given simulation kernel can clarify a posteriori how adequate this kernel is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao-Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions do not benefit from repeated updating.
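The abstract describes adaptive mixtures of importance sampling kernels (the population Monte Carlo setting) in which the mixture weights are re-estimated at each iteration, with a Rao-Blackwellized update that asymptotically minimizes a Kullback divergence to the target. The sketch below is only an illustration of that idea, not the authors' published algorithm: the toy target density, the fixed zero-mean Gaussian kernels and their scales, the sample size, and the number of iterations are all assumptions chosen for demonstration.

```python
# Minimal sketch (illustrative assumptions, not the paper's reference
# implementation) of adaptive-mixture importance sampling with a
# Rao-Blackwellized update of the mixture weights.
import numpy as np

rng = np.random.default_rng(0)

def target_pdf(x):
    # Toy (unnormalized) target: a two-component Gaussian mixture.
    return 0.3 * np.exp(-0.5 * (x + 2.0) ** 2) + 0.7 * np.exp(-0.5 * (x - 2.0) ** 2)

scales = np.array([0.5, 1.0, 3.0])               # D fixed proposal kernels N(0, scale^2)
alpha = np.full(len(scales), 1.0 / len(scales))  # mixture weights, adapted online
n = 5000                                         # samples per iteration

for t in range(20):
    # 1. Draw kernel indices from the current mixture, then sample points.
    d = rng.choice(len(scales), size=n, p=alpha)
    x = rng.normal(0.0, scales[d])

    # 2. Self-normalized importance weights against the full mixture proposal
    #    (not the individual kernel), which keeps the estimator valid.
    comp = np.exp(-0.5 * (x[:, None] / scales) ** 2) / (scales * np.sqrt(2 * np.pi))
    q_mix = comp @ alpha
    w = target_pdf(x) / q_mix
    w /= w.sum()

    # 3. Rao-Blackwellized update: every sample contributes to every kernel in
    #    proportion to that kernel's responsibility for the sample, instead of
    #    only to the kernel that actually generated it.
    resp = (comp * alpha) / q_mix[:, None]
    alpha = w @ resp
    alpha /= alpha.sum()

print("adapted mixture weights:", np.round(alpha, 3))
```

Under the conditions derived in the paper, this kind of Rao-Blackwellized update drives the mixture weights toward the mixture of the available kernels that is optimal in the Kullback divergence sense, whereas the cruder update based only on the generating kernel does not improve with repeated iterations.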
Recommendations
- Adaptive mixture importance sampling
- Convergence rates for optimised adaptive importance samplers
- Convergence and efficiency of adaptive importance sampling techniques with partial biasing
- Importance sampling schemes for evidence approximation in mixture models
- Exponential convergence of adaptive importance sampling for Markov chains
- Efficient importance sampling in mixture frameworks
- Safe adaptive importance sampling: a mixture approach
- Stochastic adaptation of importance sampler
Cites work
- scientific article; zbMATH DE number 1817585 (no title available)
- scientific article; zbMATH DE number 4174133 (no title available)
- scientific article; zbMATH DE number 3872359 (no title available)
- scientific article; zbMATH DE number 2117879 (no title available)
- Adaptive Markov Chain Monte Carlo through Regeneration
- Adaptive proposal distribution for random walk Metropolis algorithm
- An adaptive Metropolis algorithm
- Central limit theorem for sequential Monte Carlo methods and its application to Bayesian inference
- Inference in hidden Markov models.
- Intrinsic losses
- Iterated importance sampling in missing data problems
- Limit theorems for weighted samples with applications to sequential Monte Carlo methods
- Markov chains for exploring posterior distributions. (With discussion)
- Minimum variance importance sampling via population Monte Carlo
- Rates of convergence of the Hastings and Metropolis algorithms
- Recursive Monte Carlo filters: algorithms and theoretical analysis
- Self-regenerative Markov chain Monte Carlo with adaptation
- Sequential Monte Carlo Methods in Practice
- Sequential Monte Carlo Samplers
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- Weighted Average Importance Sampling and Defensive Mixture Distributions
Cited in (37)
- Bayesian model averaging in astrophysics: a review
- Layered adaptive importance sampling
- On variance stabilisation in population Monte Carlo by double Rao-Blackwellisation
- Collective proposal distributions for nonlinear MCMC samplers: mean-field theory and fast implementation
- Multifidelity importance sampling
- Minimum variance importance sampling via population Monte Carlo
- A tutorial on approximate Bayesian computation
- Generalized multiple importance sampling
- Incremental Mixture Importance Sampling With Shotgun Optimization
- Sequential Monte Carlo with transformations
- Use in practice of importance sampling for repeated MCMC for Poisson models
- Iterative importance sampling algorithms for parameter estimation
- Convergence rates for optimised adaptive importance samplers
- Gradient-based adaptive importance samplers
- Likelihood free inference for Markov processes: a comparison
- Iterative Bayesian inversion with Gaussian mixtures: finite sample implementation and large sample asymptotics
- Adaptive multiple importance sampling
- Adaptive multiple importance sampling for Gaussian processes
- Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models
- Deep Importance Sampling Using Tensor Trains with Application to a Priori and a Posteriori Rare Events
- Target-aware Bayesian inference: how to beat optimal conventional estimators
- Safe adaptive importance sampling: a mixture approach
- Infinite-dimensional gradient-based descent for alpha-divergence minimisation
- Consistency of adaptive importance sampling and recycling schemes
- Convergence of Monte Carlo distribution estimates from rival samplers
- Method for approximating target distribution of importance sampling
- Parallelizing MCMC sampling via space partitioning
- Population Monte Carlo algorithm in high dimensions
- Efficient importance sampling in mixture frameworks
- Ensemble transport adaptive importance sampling
- Transport map accelerated adaptive importance sampling, and application to inverse problems arising from multiscale stochastic reaction networks
- Stochastic adaptation of importance sampler
- A survey of sequential Monte Carlo methods for economics and finance
- Approximate Bayesian computational methods
- Particle methods for statistical inference and design optimization
- On convergence of properly weighted samples to the target distribution
- Accelerating MCMC algorithms