Noisy Monte Carlo: convergence of Markov chains with approximate transition kernels
From MaRDI portal
Publication:2631344
Abstract: Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations for which it is impractical or impossible to draw from the transition kernel P. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and it is also the case for intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
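To illustrate the idea of replacing P by an approximation P̂, here is a minimal sketch (not the paper's construction) of a noisy Metropolis-Hastings sampler in which the log-likelihood in the acceptance ratio is estimated from a random subsample of the data. All names, the N(θ, 1) model, and the tuning parameters are illustrative assumptions.

```python
import numpy as np

def noisy_mh(data, n_iters=2000, subsample=100, prop_sd=0.2, seed=0):
    """Noisy Metropolis-Hastings for the mean theta of a N(theta, 1) model.
    Each acceptance decision uses a subsampled log-likelihood estimate,
    so the chain targets an approximate kernel P-hat rather than P."""
    rng = np.random.default_rng(seed)
    n = len(data)

    def approx_loglik(theta):
        # Subsampled log-likelihood, rescaled to the full data set size.
        # This is the source of the "noise" in the transition kernel.
        idx = rng.choice(n, size=min(subsample, n), replace=False)
        return (n / len(idx)) * np.sum(-0.5 * (data[idx] - theta) ** 2)

    theta = 0.0
    chain = np.empty(n_iters)
    for t in range(n_iters):
        prop = theta + prop_sd * rng.normal()
        # Noisy acceptance ratio: both terms are re-estimated each step.
        log_alpha = approx_loglik(prop) - approx_loglik(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        chain[t] = theta
    return chain

# Synthetic data with true mean 2.0; after burn-in the noisy chain
# should concentrate near that value.
data = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=500)
chain = noisy_mh(data)
```

The subsampling makes each step cheap but perturbs the kernel; the paper's results are exactly about bounding how far such a perturbed chain can drift from the exact one.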
Recommendations
- Markov Chain Monte Carlo Algorithms: Theory and Practice
- On nonlinear Markov chain Monte Carlo
- Approximations of geometrically ergodic reversible Markov chains
- Using a Markov Chain to Construct a Tractable Approximation of an Intractable Probability Distribution
- Stability of noisy Metropolis-Hastings
Cites work
- scientific article; zbMATH DE number 3513115
- scientific article; zbMATH DE number 1025906
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 961607
- A Stochastic Approximation Method
- A theory of the learnable
- An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants
- Approximate Bayesian computational methods
- Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms
- Efficient recursions for general factorisable models
- Exact sampling with coupled Markov chains and applications to statistical mechanics
- Exponential convergence of Langevin distributions and their discrete approximations
- Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms
- Langevin diffusions and Metropolis-Hastings algorithms
- Markov chains and stochastic stability
- Recursive computing and simulation-free inference for general factorizable models
- Regular Perturbation of V-Geometrically Ergodic Markov Chains
- Riemann manifold Langevin and Hamiltonian Monte Carlo methods. With discussion and authors' reply
- Sensitivity and convergence of uniformly ergodic Markov chains
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Statistics for high-dimensional data. Methods, theory and applications.
- Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
- The pseudo-marginal approach for efficient Monte Carlo computations
Cited in (48)
- An efficient adaptive MCMC algorithm for pseudo-Bayesian quantum tomography
- Stability of doubly-intractable distributions
- Model comparison for Gibbs random fields using noisy reversible jump Markov chain Monte Carlo
- Stability of noisy Metropolis-Hastings
- Bayesian inference, model selection and likelihood estimation using fast rejection sampling: the Conway-Maxwell-Poisson distribution
- Noisy Hamiltonian Monte Carlo for Doubly Intractable Distributions
- Semiparametric Bayesian analysis for longitudinal mixed effects models with non-normal AR(1) errors
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
- A Bayesian approach to disease clustering using restricted Chinese restaurant processes
- Estimating promotion effects in email marketing using a large-scale cross-classified Bayesian joint model for nested imbalanced data
- Perturbation theory for Markov chains via Wasserstein distance
- A Bayesian multilevel model for populations of networks using exponential-family random graphs
- Likelihood-free approximate Gibbs sampling
- Using a Markov Chain to Construct a Tractable Approximation of an Intractable Probability Distribution
- Subsampling MCMC -- an introduction for the survey statistician
- Perturbation bounds for Monte Carlo within metropolis via restricted approximations
- Scalable Bayes via barycenter in Wasserstein space
- Bayesian inference in the presence of intractable normalizing functions
- Multivariate Conway-Maxwell-Poisson Distribution: Sarmanov Method and Doubly Intractable Bayesian Inference
- Accelerating pseudo-marginal MCMC using Gaussian processes
- Bayesian model comparison with un-normalised likelihoods
- Distributed Bayesian Inference in Linear Mixed-Effects Models
- Approximations of geometrically ergodic reversible Markov chains
- scientific article; zbMATH DE number 7625191
- An algorithm for distributed Bayesian inference
- Computationally efficient inference for latent position network models
- Efficient MCMC for Gibbs random fields using pre-computation
- The Monte Carlo computation error of transition probabilities
- A rare event approach to high-dimensional approximate Bayesian computation
- MEXIT: maximal un-coupling times for stochastic processes
- Ensemble Kalman methods for high-dimensional hierarchical dynamic space-time models
- Sequential Monte Carlo with transformations
- Bayesian model selection for high-dimensional Ising models, with applications to educational data
- Uncertainty quantification for Markov processes via variational principles and functional inequalities
- Informed sub-sampling MCMC: approximate Bayesian inference for large datasets
- On Russian roulette estimates for Bayesian inference with doubly-intractable likelihoods
- Exploiting multi-core architectures for reduced-variance estimation with intractable likelihoods
- Robustness of iterated function systems of Lipschitz maps
- Perturbation and Inverse Problems of Stochastic Matrices
- Bayesian computation: a summary of the current state, and samples backwards and forwards
- Sequential tests for large-scale learning
- On coupling particle filter trajectories
- Distributed computation for marginal likelihood based model choice
- Monte Carlo Markov chains constrained on graphs for a target with disconnected support
- A multilayer exponential random graph modelling approach for weighted networks
- A Function Emulation Approach for Doubly Intractable Distributions
- Markov Kernels Local Aggregation for Noise Vanishing Distribution Sampling