Optimal scalings for local Metropolis-Hastings chains on nonproduct targets in high dimensions
Publication:2389596
Abstract: We investigate local MCMC algorithms, namely the random-walk Metropolis and the Langevin algorithms, and identify the optimal choice of the local step-size as a function of the dimension of the state space, asymptotically as the dimension tends to infinity. We consider target distributions defined as a change of measure from a product law. Such structures arise, for instance, in inverse problems or Bayesian contexts when a product prior is combined with the likelihood. We state analytical results on the asymptotic behavior of the algorithms under general conditions on the change of measure. Our theory is motivated by applications to conditioned diffusion processes and inverse problems related to the 2D Navier--Stokes equation.
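The scaling result underlying this line of work can be illustrated numerically. The sketch below (a minimal illustration, not the paper's method: it uses a product Gaussian target rather than the nonproduct, change-of-measure targets studied here, and the function name `rwm` is made up for this example) runs a random-walk Metropolis chain with proposal standard deviation scaled as ℓ/√d, the regime in which the average acceptance rate tends to roughly 0.234 as the dimension d grows.

```python
import numpy as np

def rwm(d, n_steps=20000, ell=2.38, seed=0):
    """Random-walk Metropolis on a d-dimensional standard Gaussian.

    The proposal standard deviation is ell / sqrt(d): the classical
    dimension scaling under which the mean acceptance rate approaches
    about 0.234 for product targets as d -> infinity.
    Returns the empirical acceptance rate.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    log_pi = -0.5 * x @ x          # log-density up to an additive constant
    accepts = 0
    for _ in range(n_steps):
        # Gaussian proposal with the 1/sqrt(d) step-size scaling
        y = x + (ell / np.sqrt(d)) * rng.standard_normal(d)
        log_pi_y = -0.5 * y @ y
        # Metropolis accept/reject step
        if np.log(rng.random()) < log_pi_y - log_pi:
            x, log_pi = y, log_pi_y
            accepts += 1
    return accepts / n_steps

# For moderately large d the empirical acceptance rate sits near 0.234
print(rwm(d=200))
```

Rerunning with different values of `ell` shows the trade-off the optimal-scaling theory formalizes: a larger step size lowers the acceptance rate, a smaller one accepts often but moves slowly, and ℓ ≈ 2.38 balances the two for this target.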
Recommendations
- Optimal scaling of Metropolis algorithms: Heading toward general target distributions
- Optimal scaling of random-walk Metropolis algorithms on general target distributions
- Optimal scaling for various Metropolis-Hastings algorithms.
- Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions
- Scaling Limits for the Transient Phase of Local Metropolis–Hastings Algorithms
Cites work
- scientific article; zbMATH DE number 3723610 (no title available)
- scientific article; zbMATH DE number 1354815 (no title available)
- scientific article; zbMATH DE number 1827006 (no title available)
- scientific article; zbMATH DE number 840151 (no title available)
- Analysis of SPDEs arising in path sampling. I: The Gaussian case
- From Metropolis to diffusions: Gibbs states and optimal scaling.
- Infinite-dimensional dynamical systems. An introduction to dissipative parabolic PDEs and the theory of global attractors
- MCMC METHODS FOR DIFFUSION BRIDGES
- MCMC methods for sampling function space
- Monte Carlo sampling methods using Markov chains and their applications
- Optimal Scaling of Discrete Approximations to Langevin Diffusions
- Optimal acceptance rates for Metropolis algorithms: Moving beyond 0.234
- Optimal scaling for partially updating MCMC algorithms
- Optimal scaling of MALA for nonlinear regression
- Optimal scaling of Metropolis algorithms: Heading toward general target distributions
- Optimal scaling of the random walk Metropolis on elliptically symmetric unimodal targets
- Scaling Limits for the Transient Phase of Local Metropolis–Hastings Algorithms
- Simulation of conditioned diffusion and application to parameter estimation
- Stochastic Equations in Infinite Dimensions
- Weak convergence and optimal scaling of random walk Metropolis algorithms
- Weak convergence of Metropolis algorithms for non-I.I.D. target distributions
Cited in (45)
- Optimal scaling of random-walk Metropolis algorithms on general target distributions
- Optimal scaling of the random walk Metropolis: general criteria for the 0.234 acceptance rule
- Anytime parallel tempering
- Scaling Limits for the Transient Phase of Local Metropolis–Hastings Algorithms
- Optimal scaling for the transient phase of Metropolis-Hastings algorithms: the longtime behavior
- Hierarchical models: local proposal variances for RWM-within-Gibbs and MALA-within-Gibbs
- Efficiency of delayed-acceptance random walk metropolis algorithms
- Approximate large-scale Bayesian spatial modeling with application to quantitative magnetic resonance imaging
- On the convergence of adaptive sequential Monte Carlo methods
- Diffusion limits of the random walk Metropolis algorithm in high dimensions
- Large-scale Bayesian spatial-temporal regression with application to cardiac MR-perfusion imaging
- Optimal scaling of the random walk Metropolis algorithm under Lp mean differentiability
- Designing simple and efficient Markov chain Monte Carlo proposal kernels
- On the efficiency of pseudo-marginal random walk Metropolis algorithms
- Sequential Monte Carlo methods for Bayesian elliptic inverse problems
- MALA-within-Gibbs samplers for high-dimensional distributions with sparse conditional structure
- Weak convergence and optimal tuning of the reversible jump algorithm
- Optimal scaling for the transient phase of the random walk Metropolis algorithm: the mean-field limit
- Optimal strategies for the control of autonomous vehicles in data assimilation
- Hierarchical models and tuning of random walk Metropolis algorithms
- Decreasing flow uncertainty in Bayesian inverse problems through Lagrangian drifter control
- Asymptotic analysis of the random walk metropolis algorithm on ridged densities
- On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inference
- Asymptotic variance for random walk Metropolis chains in high dimensions: logarithmic growth via the Poisson equation
- An adaptive multiple-try Metropolis algorithm
- Convergence of unadjusted Hamiltonian Monte Carlo for mean-field models
- Spectral gaps for a Metropolis-Hastings algorithm in infinite dimensions
- Accelerated dimension-independent adaptive metropolis
- Some Remarks on Preconditioning Molecular Dynamics
- Localization for MCMC: sampling high-dimensional posterior distributions with local structure
- A blocking scheme for dimension-robust Gibbs sampling in large-scale image deblurring
- Hybrid Monte Carlo on Hilbert spaces
- Rate-optimal refinement strategies for local approximation MCMC
- Scaling analysis of delayed rejection MCMC methods
- Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions
- Optimal tuning of the hybrid Monte Carlo algorithm
- Scaling Up Bayesian Uncertainty Quantification for Inverse Problems Using Deep Neural Networks
- Uncertainty quantification in graph-based classification of high dimensional data
- Random walk Metropolis algorithm in high dimension with non-Gaussian target distributions
- Minimising MCMC variance via diffusion limits, with an application to simulated tempering
- Bayesian computation: a summary of the current state, and samples backwards and forwards
- On the stability of sequential Monte Carlo methods in high dimensions
- Efficient strategy for the Markov chain Monte Carlo in high-dimension with heavy-tailed target probability distribution
- An Adaptive Independence Sampler MCMC Algorithm for Bayesian Inferences of Functions
- A Bayesian Approach to Estimating Background Flows from a Passive Scalar