Adaptive Gibbs samplers and related MCMC methods
From MaRDI portal
Abstract: We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run by learning as they go in an attempt to optimize the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions.
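To make the idea in the abstract concrete, here is a minimal sketch (not the authors' code; all names and tuning choices are illustrative) of an adaptive random-scan Metropolis-within-Gibbs sampler on a toy 2D Gaussian target. The coordinate selection probabilities are re-weighted on the fly from smoothed per-coordinate acceptance rates, with a diminishing adaptation step so the changes vanish as the run progresses — the kind of condition under which the paper's positive convergence results apply.

```python
# Hedged sketch of an adaptive random-scan Metropolis-within-Gibbs sampler.
# Target, step sizes, and the re-weighting rule below are illustrative choices,
# not the specific algorithms analyzed in the paper.
import math
import random

random.seed(1)

def log_target(x):
    # Correlated bivariate Gaussian, unnormalized log-density.
    rho = 0.8
    return -(x[0]**2 - 2*rho*x[0]*x[1] + x[1]**2) / (2*(1 - rho**2))

d = 2
x = [0.0, 0.0]
probs = [0.5, 0.5]       # coordinate selection probabilities (adapted on the fly)
accepts = [1.0, 1.0]     # smoothed per-coordinate acceptance rates
n_iter = 5000
samples = []

for n in range(1, n_iter + 1):
    # Random scan: pick which coordinate to update.
    i = 0 if random.random() < probs[0] else 1
    prop = x[:]
    prop[i] += random.gauss(0.0, 1.0)   # symmetric random-walk proposal
    if math.log(random.random()) < log_target(prop) - log_target(x):
        x = prop
        acc = 1.0
    else:
        acc = 0.0
    # Diminishing adaptation: gamma_n -> 0, so the selection
    # probabilities settle down as the chain runs.
    gamma = 1.0 / n
    accepts[i] = (1 - gamma) * accepts[i] + gamma * acc
    # Re-weight toward coordinates that mix well, keeping every
    # probability bounded away from 0 so all coordinates keep updating.
    eps = 0.1
    total = accepts[0] + accepts[1]
    for j in range(d):
        target_p = eps + (1 - d * eps) * accepts[j] / total
        probs[j] = (1 - gamma) * probs[j] + gamma * target_p
    samples.append(x[:])

print(probs)   # final (still normalized) selection probabilities
```

The bound `eps` on each selection probability and the decaying step `gamma` are stand-ins for the formal conditions (diminishing adaptation plus containment-type assumptions) that the paper shows are needed — the cautionary example in the paper demonstrates that without such safeguards even a simple adaptive Gibbs sampler can fail to converge.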
Recommendations
- Adaptive Rejection Sampling for Gibbs Sampling
- An adaptive approach to Langevin MCMC
- On adaptive Metropolis-Hastings methods
- An Adaptive Independence Sampler MCMC Algorithm for Bayesian Inferences of Functions
- Adaptive Rejection Metropolis Sampling within Gibbs Sampling
Cites work
- scientific article; zbMATH DE number 597902 (no title available)
- scientific article; zbMATH DE number 720679 (no title available)
- scientific article; zbMATH DE number 2117879 (no title available)
- A note on Markov chain Monte Carlo sweep strategies
- Adaptive Markov Chain Monte Carlo through Regeneration
- An adaptive Metropolis algorithm
- Componentwise adaptation for high dimensional MCMC
- Convergence of adaptive and interacting Markov chain Monte Carlo algorithms
- Coupling and Ergodicity of Adaptive Markov Chain Monte Carlo Algorithms
- General state space Markov chains and MCMC algorithms
- Geometric ergodicity and hybrid Markov chains
- Geometric ergodicity of Metropolis algorithms
- Gibbs sampling, exponential families and orthogonal polynomials
- Implementing random scan Gibbs samplers
- Learn from thy neighbor: parallel-chain and regional adaptive MCMC
- Limit theorems for some adaptive MCMC algorithms with subgeometric kernels
- Markov chains and stochastic stability
- Monte Carlo sampling methods using Markov chains and their applications
- Monte Carlo strategies in scientific computing
- On adaptive Markov chain Monte Carlo algorithms
- On the containment condition for adaptive Markov chain Monte Carlo algorithms
- On the ergodicity of the adaptive Metropolis algorithm on unbounded domains
- On the ergodicity properties of some adaptive MCMC algorithms
- On the geometric ergodicity of hybrid samplers
- On the stability and ergodicity of adaptive scaling Metropolis algorithms
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
- Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234
- Optimal proposal distributions and adaptive MCMC
- Optimal scaling for various Metropolis-Hastings algorithms
- Optimizing random scan Gibbs samplers
- Rates of convergence of the Hastings and Metropolis algorithms
- Stability of the Gibbs sampler for Bayesian hierarchical models
- Towards optimal scaling of Metropolis-coupled Markov chain Monte Carlo
- Two convergence properties of hybrid samplers
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- Weak convergence of Metropolis algorithms for non-I.I.D. target distributions
Cited in (25)
- A posteriori stochastic correction of reduced models in delayed-acceptance MCMC, with application to multiphase subsurface inverse problems
- Approximate blocked Gibbs sampling for Bayesian neural networks
- The symplectic geometry of closed equilateral random walks in 3-space
- Convergence of conditional Metropolis-Hastings samplers
- Convergence analysis of herded-Gibbs-type sampling algorithms: effects of weight sharing
- Markov Kernels Local Aggregation for Noise Vanishing Distribution Sampling
- A new adaptive approach of the Metropolis-Hastings algorithm applied to structural damage identification using time domain data
- Scalable Bayesian Inference for Coupled Hidden Markov and Semi-Markov Models
- A Bayesian nonparametric approach to reconstruction and prediction of random dynamical systems
- A hybrid scan Gibbs sampler for Bayesian models with latent variables
- Randomized reduced forward models for efficient Metropolis-Hastings MCMC, with application to subsurface fluid flow and capacitance tomography
- Bayesian computation: a summary of the current state, and samples backwards and forwards
- Behaviour of the Gibbs sampler when conditional distributions are potentially incompatible
- Stability of adversarial Markov chains, with an application to adaptive MCMC algorithms
- Convergence of adaptive direction sampling
- Nested adaptation of MCMC algorithms
- Implementing random scan Gibbs samplers
- Geometric ergodicity of random scan Gibbs samplers for hierarchical one-way random effects models
- The containment condition and AdapFail algorithms
- Optimizing random scan Gibbs samplers
- Component-wise Markov chain Monte Carlo: uniform and geometric ergodicity under mixing and composition
- Adaptive random neighbourhood informed Markov chain Monte Carlo for high-dimensional Bayesian variable selection
- On the efficiency of adaptive MCMC algorithms
- A point mass proposal method for Bayesian state-space model fitting
- Sampling hyperparameters in hierarchical models: Improving on Gibbs for high-dimensional latent fields and large datasets
This page was built for publication: Adaptive Gibbs samplers and related MCMC methods (MaRDI item Q1948684)