Weak convergence of Metropolis algorithms for non-I.I.D. target distributions
Publication: 2467602
DOI: 10.1214/105051607000000096
zbMATH Open: 1144.60016
arXiv: 0710.3684
OpenAlex: W3100216688
MaRDI QID: Q2467602
Author: M. Bédard
Publication date: 28 January 2008
Published in: The Annals of Applied Probability
Abstract: In this paper, we optimize the efficiency of Metropolis algorithms for multidimensional target distributions whose scaling terms may depend on the dimension. We propose a method for determining the appropriate form of the proposal scaling as a function of the dimension, which leads to the proof of an asymptotic diffusion theorem. We show that when no component has a scaling term significantly smaller than the others, the asymptotically optimal acceptance rate is the well-known 0.234.
Full work available at URL: https://arxiv.org/abs/0710.3684
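The 0.234 result referenced in the abstract can be illustrated numerically. The following sketch (not taken from the paper; the i.i.d. Gaussian target, dimension d=50, and run length are illustrative assumptions) runs a random walk Metropolis chain with the classical proposal scaling ℓ/√d, under which the average acceptance rate is predicted to approach roughly 0.234 at the asymptotically optimal value ℓ ≈ 2.38:

```python
import numpy as np

# Illustrative sketch: random walk Metropolis on a d-dimensional standard
# Gaussian target (the i.i.d. product case that this paper generalizes).
# With proposal std dev l / sqrt(d) and l near 2.38, the empirical
# acceptance rate should land near the asymptotically optimal 0.234.

def rwm_acceptance_rate(d=50, ell=2.38, n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = ell / np.sqrt(d)            # proposal scale shrinks as d^{-1/2}
    x = np.zeros(d)                     # start the chain at the mode
    log_pi = lambda z: -0.5 * z @ z     # log-density of N(0, I_d), up to a constant
    accepts = 0
    for _ in range(n_steps):
        y = x + sigma * rng.standard_normal(d)
        # Metropolis rule: accept with probability min(1, pi(y)/pi(x))
        if np.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y
            accepts += 1
    return accepts / n_steps

print(f"empirical acceptance rate: {rwm_acceptance_rate():.3f}")
```

Rerunning with ℓ much smaller or larger than 2.38 pushes the acceptance rate toward 1 or 0, respectively, with a corresponding loss of efficiency; the paper characterizes when this 0.234 rule survives non-identical scaling terms across components.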
Recommendations
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- Optimal scaling of Metropolis algorithms: Heading toward general target distributions
- Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234
- Optimal scaling for various Metropolis-Hastings algorithms.
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
Mathematics Subject Classification: Numerical analysis or methods applied to Markov chains (65C40); Central limit and other weak theorems (60F05)
Cites Work
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- Bayesian computation and stochastic systems. With comments and reply.
- Monte Carlo sampling methods using Markov chains and their applications
- Optimal scaling for various Metropolis-Hastings algorithms.
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
- General state space Markov chains and MCMC algorithms
- Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234
- From Metropolis to diffusions: Gibbs states and optimal scaling.
- Optimal scaling of MaLa for nonlinear regression.
- Optimal scaling for partially updating MCMC algorithms
Cited In (42)
- Scaling analysis of delayed rejection MCMC methods
- Optimal tuning of the hybrid Monte Carlo algorithm
- Error bounds and normalising constants for sequential Monte Carlo samplers in high dimensions
- Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234
- Efficiency of delayed-acceptance random walk Metropolis algorithms
- Asymptotic analysis of the random walk Metropolis algorithm on ridged densities
- On the stability of sequential Monte Carlo methods in high dimensions
- Randomized Hamiltonian Monte Carlo as scaling limit of the bouncy particle sampler and dimension-free convergence rates
- An adaptive multiple-try Metropolis algorithm
- Diffusion limits of the random walk Metropolis algorithm in high dimensions
- Minimising MCMC variance via diffusion limits, with an application to simulated tempering
- Optimal scaling of random walk Metropolis algorithms using Bayesian large-sample asymptotics
- Bayesian computation: a summary of the current state, and samples backwards and forwards
- A hierarchical Bayesian approach for modeling the evolution of the 7-day moving average of the number of deaths by COVID-19
- Optimal scaling of random walk Metropolis algorithms with discontinuous target densities
- Optimal scaling of random-walk Metropolis algorithms on general target distributions
- Interacting Langevin diffusions: gradient structure and ensemble Kalman sampler
- Optimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets
- Adaptive Gibbs samplers and related MCMC methods
- The random walk Metropolis: linking theory and practice through a case study
- Optimal scaling for random walk Metropolis on spherically constrained target densities
- On the efficiency of pseudo-marginal random walk Metropolis algorithms
- Hierarchical models: local proposal variances for RWM-within-Gibbs and MALA-within-Gibbs
- Optimal scaling for the transient phase of Metropolis Hastings algorithms: the longtime behavior
- Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits
- Bayesian computational methods for estimation of two-parameters Weibull distribution in presence of right-censored data
- Optimal scaling of random walk Metropolis algorithms with non-Gaussian proposals
- Scaling analysis of multiple-try MCMC methods
- Weak convergence and optimal tuning of the reversible jump algorithm
- Optimal scaling of the random walk Metropolis: general criteria for the 0.234 acceptance rule
- A Metropolis-class sampler for targets with non-convex support
- A Dirichlet form approach to MCMC optimal scaling
- Random walk Metropolis algorithm in high dimension with non-Gaussian target distributions
- An automatic robust Bayesian approach to principal component regression
- Optimal scalings for local Metropolis-Hastings chains on nonproduct targets in high dimensions
- Optimal scaling of the random walk Metropolis algorithm under Lp mean differentiability
- Hierarchical models and tuning of random walk Metropolis algorithms
- Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions
- Non-stationary phase of the MALA algorithm
- Asymptotic variance for random walk Metropolis chains in high dimensions: logarithmic growth via the Poisson equation
- Convergence of Metropolis-type algorithms for a large canonical ensemble
- Optimal scaling for the transient phase of the random walk Metropolis algorithm: the mean-field limit