A note on Metropolis-Hastings kernels for general state spaces
Publication: 1296614
DOI: 10.1214/AOAP/1027961031
zbMath: 0935.60053
OpenAlex: W2114964853
MaRDI QID: Q1296614
Publication date: 4 May 2000
Published in: The Annals of Applied Probability
Full work available at URL: https://doi.org/10.1214/aoap/1027961031
Related Items (showing first 100 items)
Markov chain decomposition for convergence rate analysis ⋮ Which ergodic averages have finite asymptotic variance? ⋮ The use of a single pseudo-sample in approximate Bayesian computation ⋮ Diffusivity in multiple scattering systems ⋮ Designing simple and efficient Markov chain Monte Carlo proposal kernels ⋮ Convergence rates of two-component MCMC samplers ⋮ On the theoretical properties of the exchange algorithm ⋮ Dimension-independent likelihood-informed MCMC ⋮ Improving the convergence of reversible samplers ⋮ Delayed Acceptance ABC-SMC ⋮ FEM-based discretization-invariant MCMC methods for PDE-constrained Bayesian inverse problems ⋮ Markov chain Monte Carlo and irreversibility ⋮ Variance bounding of delayed-acceptance kernels ⋮ Geometric ergodicity of a more efficient conditional Metropolis-Hastings algorithm ⋮ Transdimensional approximate Bayesian computation for inference on invasive species models with latent variables of unknown dimension ⋮ Elementary bounds on mixing times for decomposable Markov chains ⋮ On a Metropolis-Hastings importance sampling estimator ⋮ Robust Bayesian model selection for heavy-tailed linear regression using finite mixtures ⋮ Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors ⋮ On hitting time, mixing time and geometric interpretations of Metropolis-Hastings reversiblizations ⋮ Geometric MCMC for infinite-dimensional inverse problems ⋮ A note on variance bounding for continuous time Markov chains ⋮ Equation-solving estimator based on the general n-step MHDR algorithm ⋮ Importance sampling correction versus standard averages of reversible MCMCs in terms of the asymptotic variance ⋮ Scalable Optimization-Based Sampling on Function Space ⋮ Variational principles for asymptotic variance of general Markov processes ⋮ CLTs and asymptotic variance of time-sampled Markov chains ⋮ Hierarchical Bayesian level set inversion ⋮ A Bayesian Approach to Estimating Background Flows from a Passive Scalar ⋮ Convergence rate of Markov chain methods for genomic motif discovery ⋮ Perturbation theory for Markov chains via Wasserstein distance ⋮ Comparison of hit-and-run, slice sampler and random walk Metropolis ⋮ Markov chain Monte Carlo algorithms with sequential proposals ⋮ Nonparametric survival regression using the beta-Stacy process ⋮ Efficient MCMC for Gibbs random fields using pre-computation ⋮ The Wang-Landau algorithm reaches the flat histogram criterion in finite time ⋮ Minimising MCMC variance via diffusion limits, with an application to simulated tempering ⋮ Informed Proposals for Local MCMC in Discrete Spaces ⋮ Impact of Routeing on Correlation Strength in Stationary Queueing Network Processes ⋮ On the convergence time of some non-reversible Markov chain Monte Carlo methods ⋮ Comparison of asymptotic variances of inhomogeneous Markov chains with application to Markov chain Monte Carlo methods ⋮ A central limit theorem for adaptive and interacting Markov chains ⋮ Adaptive dimension reduction to accelerate infinite-dimensional geometric Markov chain Monte Carlo ⋮ Honest exploration of intractable probability distributions via Markov chain Monte Carlo. ⋮ Ordering and improving the performance of Monte Carlo Markov chains. 
⋮ Spectral gaps for a Metropolis-Hastings algorithm in infinite dimensions ⋮ Variance bounding Markov chains ⋮ A stable manifold MCMC method for high dimensions ⋮ Noisy gradient flow from a random walk in Hilbert space ⋮ Determining white noise forcing from Eulerian observations in the Navier-Stokes equation ⋮ Informed reversible jump algorithms ⋮ Bayesian parameter identification for Turing systems on stationary and evolving domains ⋮ An Adaptive Independence Sampler MCMC Algorithm for Bayesian Inferences of Functions ⋮ Perturbation bounds for Monte Carlo within metropolis via restricted approximations ⋮ Counterexamples for optimal scaling of Metropolis-Hastings chains with rough target densities ⋮ Algorithms for improving efficiency of discrete Markov chains ⋮ Importance sampling: intrinsic dimension and computational cost ⋮ On a generalization of the preconditioned Crank-Nicolson metropolis algorithm ⋮ Weak convergence and optimal tuning of the reversible jump algorithm ⋮ Proposals which speed up function-space MCMC ⋮ Maximin design on non hypercube domains and kernel interpolation ⋮ Convergence of Conditional Metropolis-Hastings Samplers ⋮ Numerical integration using V-uniformly ergodic Markov chains ⋮ Up-and-down experiments of first and second order ⋮ Bayesian Inference for Non-Gaussian Ornstein–Uhlenbeck Stochastic Volatility Processes ⋮ Efficient Construction of Reversible Jump Markov Chain Monte Carlo Proposal Distributions ⋮ An Extension of the Metropolis Algorithm ⋮ Efficiency and Convergence Properties of Slice Samplers ⋮ Generalized darting Monte Carlo ⋮ The pseudo-marginal approach for efficient Monte Carlo computations ⋮ Extra chance generalized hybrid Monte Carlo ⋮ Variational principles of hitting times for non-reversible Markov chains ⋮ Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains ⋮ Parallel and interacting Markov chain Monte Carlo algorithm ⋮ Metropolis-Hastings reversiblizations of non-reversible Markov chains ⋮ On particle Gibbs sampling ⋮ Bayesian inversion of a diffusion model with application to biology ⋮ Improving efficiency of data augmentation algorithms using Peskun's theorem ⋮ Correlation formulas for Markovian network processes in a random environment ⋮ Two-scale coupling for preconditioned Hamiltonian Monte Carlo in infinite dimensions ⋮ Approximation and sampling of multivariate probability distributions in the tensor train decomposition ⋮ Peskun-Tierney ordering for Markovian Monte Carlo: beyond the reversible scenario ⋮ Hastings-Metropolis algorithms and reference measures ⋮ A Metropolis-class sampler for targets with non-convex support ⋮ Noisy Hamiltonian Monte Carlo for Doubly Intractable Distributions ⋮ Conductance bounds on the L2 convergence rate of Metropolis algorithms on unbounded state spaces ⋮ MALA with annealed proposals: a generalization of locally and globally balanced proposal distributions ⋮ Accelerated Dimension-Independent Adaptive Metropolis ⋮ Non-stationary phase of the MALA algorithm ⋮ Two Metropolis--Hastings Algorithms for Posterior Measures with Non-Gaussian Priors in Infinite Dimensions ⋮ Wang-Landau algorithm: an adapted random walk to boost convergence ⋮ On the geometric ergodicity of Hamiltonian Monte Carlo ⋮ Adaptive random neighbourhood informed Markov chain Monte Carlo for high-dimensional Bayesian variable selection ⋮ Importance sampling for families of distributions ⋮ Metropolis Integration Schemes for Self-Adjoint Diffusions ⋮ Mixing rates for Hamiltonian Monte Carlo algorithms in finite 
and infinite dimensions ⋮ Diffusion limit for the random walk Metropolis algorithm out of stationarity ⋮ Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms ⋮ Statistical Image Analysis for a Confocal Microscopy Two-Dimensional Section of Cartilage Growth ⋮ Variational formulas for asymptotic variance of general discrete-time Markov chains
Cites Work
- Reversible jump Markov chain Monte Carlo computation and Bayesian model determination
- Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions
- Bayesian computation and stochastic systems. With comments and reply.
- Markov chains for exploring posterior distributions. (With discussion)
- Optimum Monte-Carlo sampling using Markov chains
- Monte Carlo sampling methods using Markov chains and their applications