Convergence of adaptive and interacting Markov chain Monte Carlo algorithms

Publication:449997

DOI: 10.1214/11-AOS938
zbMATH Open: 1246.65003
arXiv: 1203.3036
OpenAlex: W2069605605
MaRDI QID: Q449997
FDO: Q449997


Authors: Gersende Fort, Eric Moulines, P. Priouret


Publication date: 3 September 2012

Published in: The Annals of Statistics

Abstract: Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms have recently been introduced in the literature. These novel simulation algorithms are designed to increase simulation efficiency when sampling from complex distributions. Motivated by some recently introduced algorithms (such as the adaptive Metropolis algorithm and the interacting tempering algorithm), we develop a general methodological and theoretical framework to establish both the convergence of the marginal distribution and a strong law of large numbers. This framework weakens the conditions introduced in the pioneering paper by Roberts and Rosenthal [J. Appl. Probab. 44 (2007) 458--475]. It also covers the case when the target distribution π is sampled by using Markov transition kernels with a stationary distribution that differs from π.
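
For readers unfamiliar with the class of algorithms the abstract refers to, below is a minimal illustrative sketch of the adaptive Metropolis algorithm (Haario, Saksman and Tamminen), one of the motivating examples named in the abstract. It is not the framework developed in the paper; the target, dimension, scaling constant and adaptation schedule are illustrative assumptions.

# Minimal sketch of the adaptive Metropolis algorithm: a random-walk
# Metropolis sampler whose Gaussian proposal covariance is adapted from
# the running empirical covariance of the chain. Illustrative only.
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=10000, eps=1e-6, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = len(x0)
    sd = 2.4 ** 2 / d                      # classical dimension-dependent scaling
    x = np.asarray(x0, dtype=float)
    mean = x.copy()                        # running mean of the chain
    cov = np.eye(d)                        # running covariance estimate
    samples = np.empty((n_iter, d))
    log_px = log_target(x)
    for n in range(1, n_iter + 1):
        # Regularized proposal covariance built from the adapted estimate.
        prop_cov = sd * cov + sd * eps * np.eye(d)
        y = rng.multivariate_normal(x, prop_cov)
        log_py = log_target(y)
        if np.log(rng.random()) < log_py - log_px:   # Metropolis accept/reject
            x, log_px = y, log_py
        samples[n - 1] = x
        # Recursive (Welford-style) update of the empirical mean and covariance:
        # this is the adaptation step that makes the kernel non-Markovian.
        delta = x - mean
        mean += delta / (n + 1)
        cov += (np.outer(delta, x - mean) - cov) / (n + 1)
    return samples

# Example usage: sample a 2-d standard Gaussian target.
if __name__ == "__main__":
    chain = adaptive_metropolis(lambda z: -0.5 * np.dot(z, z), np.zeros(2))
    print(chain.mean(axis=0))

Because the proposal distribution depends on the whole past of the chain, the process is no longer a Markov chain with a fixed kernel, which is why convergence results such as those established in this paper are needed.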


Full work available at URL: https://arxiv.org/abs/1203.3036




Recommendations




Cites Work


Cited In (46)




