Optimal scalings for local Metropolis-Hastings chains on nonproduct targets in high dimensions

Publication: 2389596

DOI: 10.1214/08-AAP563
zbMath: 1172.60328
arXiv: 0908.0865
MaRDI QID: Q2389596

Andrew M. Stuart, Gareth O. Roberts, Alexandros Beskos

Publication date: 17 July 2009

Published in: The Annals of Applied Probability

Full work available at URL: https://arxiv.org/abs/0908.0865



Related Items

Asymptotic analysis of the random walk metropolis algorithm on ridged densities
A blocking scheme for dimension-robust Gibbs sampling in large-scale image deblurring
On the convergence of adaptive sequential Monte Carlo methods
Designing simple and efficient Markov chain Monte Carlo proposal kernels
An adaptive multiple-try Metropolis algorithm
Uncertainty Quantification in Graph-Based Classification of High Dimensional Data
Hierarchical models: local proposal variances for RWM-within-Gibbs and MALA-within-Gibbs
Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions
Optimal strategies for the control of autonomous vehicles in data assimilation
Scaling Up Bayesian Uncertainty Quantification for Inverse Problems Using Deep Neural Networks
Large-Scale Bayesian Spatial-Temporal Regression with Application to Cardiac MR-Perfusion Imaging
Optimal scaling of random-walk Metropolis algorithms on general target distributions
Convergence of unadjusted Hamiltonian Monte Carlo for mean-field models
Diffusion limits of the random walk Metropolis algorithm in high dimensions
Optimal tuning of the hybrid Monte Carlo algorithm
A Bayesian Approach to Estimating Background Flows from a Passive Scalar
Minimising MCMC variance via diffusion limits, with an application to simulated tempering
Localization for MCMC: sampling high-dimensional posterior distributions with local structure
Optimal scaling for the transient phase of Metropolis Hastings algorithms: the longtime behavior
Spectral gaps for a Metropolis-Hastings algorithm in infinite dimensions
MALA-within-Gibbs Samplers for High-Dimensional Distributions with Sparse Conditional Structure
An Adaptive Independence Sampler MCMC Algorithm for Bayesian Inferences of Functions
On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inference
Efficient strategy for the Markov chain Monte Carlo in high-dimension with heavy-tailed target probability distribution
Weak convergence and optimal tuning of the reversible jump algorithm
Optimal Scaling of the Random Walk Metropolis: General Criteria for the 0.234 Acceptance Rule
Hierarchical models and tuning of random walk Metropolis algorithms
Hybrid Monte Carlo on Hilbert spaces
Optimal scaling of the random walk Metropolis algorithm under Lp mean differentiability
On the stability of sequential Monte Carlo methods in high dimensions
Asymptotic variance for random walk Metropolis chains in high dimensions: logarithmic growth via the Poisson equation
Sequential Monte Carlo methods for Bayesian elliptic inverse problems
Bayesian computation: a summary of the current state, and samples backwards and forwards
Scaling analysis of delayed rejection MCMC methods
Random walk Metropolis algorithm in high dimension with non-Gaussian target distributions
Some Remarks on Preconditioning Molecular Dynamics
Efficiency of delayed-acceptance random walk metropolis algorithms
Anytime parallel tempering
Accelerated Dimension-Independent Adaptive Metropolis
Approximate large-scale Bayesian spatial modeling with application to quantitative magnetic resonance imaging
On the efficiency of pseudo-marginal random walk Metropolis algorithms
Optimal scaling for the transient phase of the random walk Metropolis algorithm: the mean-field limit
Decreasing Flow Uncertainty in Bayesian Inverse Problems Through Lagrangian Drifter Control



Cites Work