Rates of convergence of the Hastings and Metropolis algorithms

From MaRDI portal

DOI: 10.1214/aos/1033066201
zbMath: 0854.60065
OpenAlex: W1973594349
Wikidata: Q56994734
MaRDI QID: Q1922398

Kerrie L. Mengersen, Richard L. Tweedie

Publication date: 5 January 1997

Published in: The Annals of Statistics

Full work available at URL: https://doi.org/10.1214/aos/1033066201



Related Items

Exponential inequalities for unbounded functions of geometrically ergodic Markov chains: applications to quantitative error bounds for regenerative Metropolis algorithms
On the convergence rates of some adaptive Markov chain Monte Carlo algorithms
Note on the Sampling Distribution for the Metropolis-Hastings Algorithm
MCMC Algorithms for Posteriors on Matrix Spaces
Markov Chain Importance Sampling—A Highly Efficient Estimator for MCMC
Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits
Improved Sampling-Importance Resampling and Reduced Bias Importance Sampling
Numerical Results for the Metropolis Algorithm
Nonasymptotic Bounds on the Mean Square Error for MCMC Estimates via Renewal Techniques
On the convergence rate of the elitist genetic algorithm based on mutation probability
State estimation for aoristic models
Convergence of Position-Dependent MALA with Application to Conditional Simulation in GLMMs
Spectral gaps and error estimates for infinite-dimensional Metropolis-Hastings with non-Gaussian priors
Analysis of a Class of Multilevel Markov Chain Monte Carlo Algorithms Based on Independent Metropolis–Hastings
Neuronized Priors for Bayesian Sparse Linear Regression
Finding our way in the dark: approximate MCMC for approximate Bayesian methods
A large deviation principle for the empirical measures of Metropolis-Hastings chains
Scalable Optimization-Based Sampling on Function Space
Estimation and inference by stochastic optimization
Exact convergence analysis for Metropolis–Hastings independence samplers in Wasserstein distances
Comparison of hit-and-run, slice sampler and random walk Metropolis
Bayesian Inverse Problems with $l_1$ Priors: A Randomize-Then-Optimize Approach
Pathwise accuracy and ergodicity of metropolized integrators for SDEs
Adaptive MCMC methods for inference on affine stochastic volatility models with jumps
Convergence of Conditional Metropolis-Hastings Samplers
Markov Chain Monte Carlo for Computing Rare-Event Probabilities for a Heavy-Tailed Random Walk
On the Use of Local Optimizations within Metropolis–Hastings Updates
Markov-chain Monte Carlo: Some practical implications of theoretical results
Approximating Hidden Gaussian Markov Random Fields
On the geometric ergodicity of Metropolis-Hastings algorithms
A Bayesian inference approach to the ill-posed Cauchy problem of steady-state heat conduction
Coupling a stochastic approximation version of EM with an MCMC procedure
Perfect Samplers for Mixtures of Distributions
Efficiency and Convergence Properties of Slice Samplers
Improving Convergence of the Hastings–Metropolis Algorithm with an Adaptive Proposal
A note on geometric ergodicity and floating-point roundoff error
Ergodicity of Markov chain Monte Carlo with reversible proposal
Exact bound for the convergence of Metropolis chains
Theoretical and numerical comparison of some sampling methods for molecular dynamics
Bayesian computation: a summary of the current state, and samples backwards and forwards
Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Component-wise Markov chain Monte Carlo: uniform and geometric ergodicity under mixing and composition
Pseudo-perfect and adaptive variants of the Metropolis–Hastings algorithm with an independent candidate density
Simple conditions for metastability of continuous Markov chains
Sensitivity and convergence of uniformly ergodic Markov chains
Conductance bounds on the L2 convergence rate of Metropolis algorithms on unbounded state spaces
Estimation of risk contributions with MCMC
Using a Markov Chain to Construct a Tractable Approximation of an Intractable Probability Distribution
Metropolis Integration Schemes for Self-Adjoint Diffusions
Limit theorems for sequential MCMC methods
Maximum Likelihood Estimation of a Multi-Dimensional Log-Concave Density
Particle Markov Chain Monte Carlo Methods
Computation of Expectations by Markov Chain Monte Carlo Methods
Stochastic gradient descent and fast relaxation to thermodynamic equilibrium: A stochastic control approach
Using simulation methods for Bayesian econometric models: inference, development, and communication
Slow convergence of the Gibbs sampler
Polynomial convergence rates of Markov chains
Geometric ergodicity of Metropolis algorithms
On the influence of the proposal distributions on a reversible jump MCMC algorithm applied to the detection of multiple change-points
Which ergodic averages have finite asymptotic variance?
Approximate Bayesian computation by modelling summary statistics in a quasi-likelihood framework
Practical drift conditions for subgeometric rates of convergence
A new class of stochastic EM algorithms. Escaping local maxima and handling intractable sampling
Information-geometric Markov chain Monte Carlo methods using diffusions
Convergence rates of two-component MCMC samplers
Oracle lower bounds for stochastic gradient sampling algorithms
Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
On the theoretical properties of the exchange algorithm
Exact convergence analysis of the independent Metropolis-Hastings algorithms
Saddlepoint approximation for the generalized inverse Gaussian Lévy process
Multilevel Sequential Monte Carlo with Dimension-Independent Likelihood-Informed Proposals
A computable bound of the essential spectral radius of finite range Metropolis-Hastings kernels
Batch means and spectral variance estimators in Markov chain Monte Carlo
Annealing stochastic approximation Monte Carlo algorithm for neural network training
Improving the convergence of reversible samplers
Variance bounding of delayed-acceptance kernels
Stability of noisy Metropolis-Hastings
On the ergodicity properties of some adaptive MCMC algorithms
Directed hybrid random networks mixing preferential attachment with uniform attachment mechanisms
Multiplicative random walk Metropolis-Hastings on the real line
Parameter estimation of population pharmacokinetic models with stochastic differential equations: implementation of an estimation algorithm
Rates of convergence of the Hastings and Metropolis algorithms
Convergence properties of the Gibbs sampler for perturbations of Gaussians
On a Metropolis-Hastings importance sampling estimator
Dimension-Independent MCMC Sampling for Inverse Problems with Non-Gaussian Priors
Central limit theorem for triangular arrays of non-homogeneous Markov chains
Sequentially interacting Markov chain Monte Carlo methods
On the rate of convergence of the Gibbs sampler for the 1-D Ising model by geometric bound
Markov chain Monte Carlo: can we trust the third significant figure?
Optimal scaling of random-walk Metropolis algorithms on general target distributions
A patch that imparts unconditional stability to explicit integrators for Langevin-like equations
Numerical simulation of polynomial-speed convergence phenomenon
The random walk Metropolis: linking theory and practice through a case study
Fluctuations of interacting Markov chain Monte Carlo methods
Metropolis-Hastings algorithms with acceptance ratios of nearly 1
Stability of partially implicit Langevin schemes and their MCMC variants
A short history of Markov chain Monte Carlo: Subjective recollections from incomplete data
Adaptive Gibbs samplers and related MCMC methods
First hitting time analysis of the independence Metropolis sampler
Parallel hierarchical sampling: a general-purpose interacting Markov chains Monte Carlo algorithm
Remarks on the speed of convergence of mixing coefficients and applications
A Metropolis-Hastings based method for sampling from the \(G\)-Wishart distribution in Gaussian graphical models
Perturbation theory for Markov chains via Wasserstein distance
Convergence of adaptive and interacting Markov chain Monte Carlo algorithms
Data augmentation, frequentist estimation, and the Bayesian analysis of multinomial logit models
A central limit theorem for adaptive and interacting Markov chains
Markovian stochastic approximation with expanding projections
Honest exploration of intractable probability distributions via Markov chain Monte Carlo
A geometric interpretation of the Metropolis-Hastings algorithm
Geometric ergodicity for Bayesian shrinkage models
Variance bounding Markov chains
Geometric ergodicity of a Metropolis-Hastings algorithm for Bayesian inference of phylogenetic branch lengths
Convergence of Metropolis-type algorithms for a large canonical ensemble
Extremal indices, geometric ergodicity of Markov chains and MCMC
Estimating drift and minorization coefficients for Gibbs sampling algorithms
Perturbation bounds for Monte Carlo within Metropolis via restricted approximations
On the effectiveness of Monte Carlo for initial uncertainty forecasting in nonlinear dynamical systems
Mixture of transmuted Pareto distribution: properties, applications and estimation under Bayesian framework
Fast mixing of Metropolis-Hastings with unimodal targets
A Monte Carlo integration approach to estimating drift and minorization coefficients for Metropolis-Hastings samplers
Convergence of contrastive divergence algorithm in exponential family
On convergence of properly weighted samples to the target distribution
Irreducibility and geometric ergodicity of Hamiltonian Monte Carlo
Fourier transform MCMC, heavy-tailed distributions, and geometric ergodicity
Quantitative non-geometric convergence bounds for independence samplers
Interacting Markov chain Monte Carlo methods for solving nonlinear measure-valued equations
f-SAEM: a fast stochastic approximation of the EM algorithm for nonlinear mixed effects models
Quantitative bounds on convergence of time-inhomogeneous Markov chains
Accelerating diffusions
Variable transformation to obtain geometric ergodicity in the random-walk Metropolis algorithm
Exponential concentration inequalities for additive functionals of Markov chains
Bayesian Analysis of Growth Curves Using Mixed Models Defined by Stochastic Differential Equations
Convergence of adaptive mixtures of importance sampling schemes
Attaining the optimal Gaussian diffusion acceleration
Bayesian Latent Variable Models for Median Regression on Multiple Outcomes
MCMC convergence diagnosis via multivariate bounds on log-concave densities
Approximation and sampling of multivariate probability distributions in the tensor train decomposition
On the geometrical convergence of Gibbs sampler in \(\mathbb R^d\)
A Metropolis-class sampler for targets with non-convex support
Micro-local analysis for the Metropolis algorithm
Geometric convergence of the Metropolis-Hastings simulation algorithm
What do we know about the Metropolis algorithm?
Geometric ergodicity of Gibbs and block Gibbs samplers for a hierarchical random effects model
Large deviations for the empirical measure of the zig-zag process
Rademacher complexity for Markov chains: applications to kernel smoothing and Metropolis-Hastings
A Bayesian approach to continuous type principal-agent problems
\(V\)-subgeometric ergodicity for a Hastings-Metropolis algorithm
Polynomial ergodicity of Markov transition kernels
Information bounds for Gibbs samplers
Deep composition of tensor-trains using squared inverse Rosenblatt transports
Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms
Bounds on regeneration times and convergence rates for Markov chains
Bayesian analysis of the logit model and comparison of two Metropolis-Hastings strategies
Exact and computationally efficient Bayesian inference for generalized Markov modulated Poisson processes
Variance reduction for additive functionals of Markov chains via martingale representations
Perfect sampling from independent Metropolis-Hastings chains


