Optimal scaling for partially updating MCMC algorithms
From MaRDI portal
Publication: Q997939
DOI: 10.1214/105051605000000791 · zbMATH Open: 1127.60021 · arXiv: math/0607054 · OpenAlex: W2088198131 · MaRDI QID: Q997939
Publication date: 8 August 2007
Published in: The Annals of Applied Probability
Abstract: In this paper we consider optimal scaling problems for high-dimensional Metropolis--Hastings algorithms in which updates may be of lower dimension than the target density itself. We find that the optimal scaling rule for the Metropolis algorithm, which tunes the overall acceptance rate to 0.234, holds for the so-called Metropolis-within-Gibbs algorithm as well. Furthermore, the optimal efficiency obtainable is independent of the dimensionality of the update rule. This has important implications for the MCMC practitioner: since high-dimensional updates are generally more demanding computationally, lower-dimensional updates are to be preferred. Similar results, with rather different conclusions, are given for so-called Langevin updates; in that case, high-dimensional updates are frequently most efficient, even after accounting for computing costs.
Full work available at URL: https://arxiv.org/abs/math/0607054
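The abstract's claim can be illustrated with a minimal sketch (not from the paper itself): a Metropolis-within-Gibbs sampler on a standard Gaussian product target, updating a random block of c coordinates per step with a random-walk proposal. All names and parameter choices below are assumptions for illustration; the standard scaling heuristic for i.i.d. unit-variance targets sets the per-coordinate proposal standard deviation to about 2.38 divided by the square root of the update dimension, which drives the acceptance rate toward roughly 0.234 as the block size grows, independently of how many coordinates are updated at once.

```python
import numpy as np

def log_target(x):
    # log-density of N(0, I_d) up to an additive constant
    return -0.5 * np.dot(x, x)

def mwg_acceptance(d=50, c=5, n_iter=20000, seed=0):
    """Metropolis-within-Gibbs with random-walk block updates (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    scale = 2.38 / np.sqrt(c)  # heuristic optimal step size for a c-dim update
    accepted = 0
    for _ in range(n_iter):
        idx = rng.choice(d, size=c, replace=False)   # pick a random block of c coordinates
        prop = x.copy()
        prop[idx] += scale * rng.standard_normal(c)  # random-walk proposal on the block
        # Metropolis accept/reject step (symmetric proposal)
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
            accepted += 1
    return accepted / n_iter

rate = mwg_acceptance()
print(f"acceptance rate: {rate:.3f}")  # approaches 0.234 as the block size grows
```

For small block sizes the empirical rate sits somewhat above 0.234 (the rule is an asymptotic statement); the point of the sketch is that the same tuning target applies whatever block dimension c is chosen.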
Recommendations
- Optimal scaling for various Metropolis-Hastings algorithms.
- Optimal scaling of Metropolis algorithms: Heading toward general target distributions
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
- Optimal scaling of MCMC beyond Metropolis
- Optimal scaling of the random walk Metropolis: general criteria for the 0.234 acceptance rule
Mathematics Subject Classification
- Computational methods in Markov chains (60J22)
- Monte Carlo methods (65C05)
- Numerical analysis or methods applied to Markov chains (65C40)
- Central limit and other weak theorems (60F05)
Cites Work
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- Langevin diffusions and Metropolis-Hastings algorithms
- Optimal scaling for various Metropolis-Hastings algorithms.
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
- From Metropolis to diffusions: Gibbs states and optimal scaling.
Cited In (24)
- Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234
- Optimal scaling of random walk Metropolis algorithms with discontinuous target densities
- Optimal scaling of random-walk Metropolis algorithms on general target distributions
- Optimal scaling of MCMC beyond Metropolis
- The random walk Metropolis: linking theory and practice through a case study
- Bayesian joint modeling of high-dimensional discrete multivariate longitudinal data using generalized linear mixed models
- Optimal scaling for random walk Metropolis on spherically constrained target densities
- Bayesian generalized linear low rank regression models for the detection of vaccine-adverse event associations
- Estimating and Projecting Trends in HIV/AIDS Generalized Epidemics Using Incremental Mixture Importance Sampling
- Hierarchical models: local proposal variances for RWM-within-Gibbs and MALA-within-Gibbs
- Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits
- Optimal scaling of random walk Metropolis algorithms with non-Gaussian proposals
- Solution of the inverse scattering problem from inhomogeneous media using affine invariant sampling
- Weak convergence and optimal tuning of the reversible jump algorithm
- Statistical analysis of an endemic disease from a capture–recapture experiment
- A Dirichlet form approach to MCMC optimal scaling
- Weak convergence of Metropolis algorithms for non-I.I.D. target distributions
- Optimal scalings for local Metropolis-Hastings chains on nonproduct targets in high dimensions
- Component-wise Markov chain Monte Carlo: uniform and geometric ergodicity under mixing and composition
- Hierarchical models and tuning of random walk Metropolis algorithms
- MALA-within-Gibbs Samplers for High-Dimensional Distributions with Sparse Conditional Structure
- Mixing of MCMC algorithms
- Optimal scaling of Metropolis algorithms: Heading toward general target distributions
- Bayesian Variable Selections for Probit Models with Componentwise Gibbs Samplers