Importance sampling correction versus standard averages of reversible MCMCs in terms of the asymptotic variance
From MaRDI portal
Publication: 2196543
DOI: 10.1016/J.SPA.2020.05.006 · zbMATH Open: 1455.60099 · arXiv: 1706.09873 · OpenAlex: W2796367494 · Wikidata: Q109744819 · Scholia: Q109744819 · MaRDI QID: Q2196543 · FDO: Q2196543
Publication date: 3 September 2020
Published in: Stochastic Processes and their Applications
Abstract: We establish an ordering criterion for the asymptotic variances of two consistent Markov chain Monte Carlo (MCMC) estimators: an importance sampling (IS) estimator, based on an approximate reversible chain and subsequent IS weighting, and a standard MCMC estimator, based on an exact reversible chain. Essentially, we relax the Peskun-type covariance ordering criterion by considering two different invariant probabilities, and obtain, in place of a strict ordering of asymptotic variances, a bound of the asymptotic variance of IS by that of the direct MCMC. Simple examples show that IS can have arbitrarily better or worse asymptotic variance than Metropolis-Hastings and delayed-acceptance (DA) MCMC. Our ordering implies that IS is guaranteed to be competitive up to a factor depending on the supremum of the (marginal) IS weight. We elaborate upon the criterion in the case of unbiased estimators as part of an auxiliary variable framework. We show how the criterion implies asymptotic variance guarantees for IS in terms of pseudo-marginal (PM) and DA corrections, essentially if the ratio of exact and approximate likelihoods is bounded. We also show that convergence of the IS chain can be less affected by unbounded high-variance unbiased estimators than PM and DA chains.
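The two estimators compared in the abstract can be illustrated with a minimal sketch (not the paper's experiments): a 1D Gaussian toy problem where the exact target is a standard normal and the approximate target is a slightly wider normal. The IS estimator runs a random-walk Metropolis chain on the approximate target and applies a self-normalized IS correction with weights proportional to the ratio of exact to approximate densities; the standard estimator runs the chain on the exact target directly. All names and the specific densities here are illustrative assumptions.

```python
import numpy as np

def rw_metropolis(logpdf, x0, n, step, rng):
    """Random-walk Metropolis chain targeting the density exp(logpdf)."""
    x, lp = x0, logpdf(x0)
    out = np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        lp_prop = logpdf(prop)
        # Accept with probability min(1, pi(prop)/pi(x)).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Exact target: N(0, 1). Approximate target: N(0, 1.1) (toy stand-in).
log_pi = lambda x: -0.5 * x**2
log_pi_approx = lambda x: -0.5 * x**2 / 1.1

rng = np.random.default_rng(0)
n = 200_000

# Standard MCMC estimator of E_pi[x^2]: chain targets pi directly.
chain_exact = rw_metropolis(log_pi, 0.0, n, 2.4, rng)
est_mcmc = np.mean(chain_exact**2)

# IS estimator: chain targets pi_approx, then self-normalized IS weighting
# with w proportional to pi / pi_approx (bounded here, as the criterion assumes).
chain_approx = rw_metropolis(log_pi_approx, 0.0, n, 2.4, rng)
logw = log_pi(chain_approx) - log_pi_approx(chain_approx)
w = np.exp(logw - logw.max())          # stabilize before normalizing
est_is = np.sum(w * chain_approx**2) / np.sum(w)
```

Both estimates converge to E_pi[x^2] = 1; because the weight ratio is bounded on this example, the paper's ordering says the IS estimator's asymptotic variance is controlled by that of the direct chain up to a factor involving the supremum of the weight.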
Full work available at URL: https://arxiv.org/abs/1706.09873
Keywords: Markov chain Monte Carlo; importance sampling; unbiased estimator; asymptotic variance; pseudo-marginal algorithm; delayed-acceptance
Cites Work
- Title not available (4 cited works)
- Markov Chains and Stochastic Stability
- Covariance structure of the Gibbs sampler with applications to the comparisons of estimators and augmentation schemes
- The pseudo-marginal approach for efficient Monte Carlo computations
- Particle Markov Chain Monte Carlo Methods
- Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator
- Monte Carlo sampling methods using Markov chains and their applications
- SMC²: An Efficient Algorithm for Sequential Analysis of State Space Models
- Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains
- A note on Metropolis-Hastings kernels for general state spaces
- Geometric ergodicity of Metropolis algorithms
- Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms
- Coupling and Ergodicity of Adaptive Markov Chain Monte Carlo Algorithms
- On the ergodicity properties of some adaptive MCMC algorithms
- Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions
- Delayed acceptance particle MCMC for exact inference in stochastic kinetic models
- Renewal theory and computable convergence rates for geometrically ergodic Markov chains
- Safe and Effective Importance Sampling
- Importance Sampling in Stochastic Programming: A Markov Chain Monte Carlo Approach
- Establishing some order amongst exact approximations of MCMCs
- Optimum Monte-Carlo sampling using Markov chains
- Importance Sampling for Stochastic Simulations
- Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms
- Uniform ergodicity of the iterated conditional SMC and geometric ergodicity of particle Gibbs samplers
- Markov Chains
- Explicit error bounds for Markov chain Monte Carlo
- Examples comparing importance sampling and the Metropolis algorithm
- Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction
- A vanilla Rao-Blackwellization of Metropolis-Hastings algorithms
- Nonlocal Monte Carlo algorithm for self-avoiding walks with fixed endpoints.
- Pseudo-marginal Metropolis–Hastings sampling using averages of unbiased estimators
- Randomize-Then-Optimize: A Method for Sampling from Posterior Distributions in Nonlinear Inverse Problems
- Speeding up MCMC by Delayed Acceptance and Data Subsampling
- On random- and systematic-scan samplers
Cited In (6)
- Efficiency of delayed-acceptance random walk Metropolis algorithms
- Ensemble MCMC: accelerating pseudo-marginal MCMC for state space models using the ensemble Kalman filter
- Variance bounding of delayed-acceptance kernels
- Unbiased Inference for Discretely Observed Hidden Markov Model Diffusions
- Conditional particle filters with diffuse initial distributions
- Sampling algorithms in statistical physics: a guide for statistics and machine learning