Noisy Monte Carlo: convergence of Markov chains with approximate transition kernels


DOI: 10.1007/S11222-014-9521-X
zbMATH Open: 1342.60122
arXiv: 1403.5496
OpenAlex: W1964607942
MaRDI QID: Q2631344
FDO: Q2631344


Authors: Pierre Alquier, Nial Friel, Richard G. Everitt, Aidan Boland


Publication date: 29 July 2016

Published in: Statistics and Computing

Abstract: Monte Carlo algorithms often aim to draw from a distribution $\pi$ by simulating a Markov chain with transition kernel $P$ such that $\pi$ is invariant under $P$. However, there are many situations for which it is impractical or impossible to draw from the transition kernel $P$. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and also for intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace $P$ by an approximation $\hat{P}$. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel $\hat{P}$ is to the chain given by $P$. We apply these results to several examples from spatial statistics and network analysis.
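
Below is a minimal sketch, not taken from the paper, of the kind of approximate kernel the abstract describes: a random-walk Metropolis chain in which the expensive full-data log-likelihood (exact kernel $P$) is replaced by a subsampled estimate (noisy kernel $\hat{P}$). The Gaussian toy model, the function names, and all tuning constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (assumption, not from the paper): observations from a
# Normal(theta, 1) model. The "expensive" full log-likelihood defines the
# exact kernel P; a subsampled estimate defines the approximate kernel hat{P}.
data = rng.normal(loc=1.5, scale=1.0, size=10_000)


def full_loglik(theta):
    """Exact log-likelihood over all data (used by the exact kernel P)."""
    return -0.5 * np.sum((data - theta) ** 2)


def noisy_loglik(theta, m=500):
    """Rescaled subsample estimate of the log-likelihood (used by hat{P})."""
    batch = rng.choice(data, size=m, replace=False)
    return len(data) / m * (-0.5 * np.sum((batch - theta) ** 2))


def metropolis_hastings(loglik, n_iters=5_000, step=0.02, theta0=1.0):
    """Random-walk Metropolis chain driven by the supplied log-likelihood."""
    theta = theta0
    chain = np.empty(n_iters)
    for t in range(n_iters):
        prop = theta + step * rng.normal()
        # With noisy_loglik this acceptance ratio is only an approximation,
        # so the chain targets a perturbed version of pi.
        log_alpha = loglik(prop) - loglik(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        chain[t] = theta
    return chain


exact_chain = metropolis_hastings(full_loglik)   # chain with kernel P
noisy_chain = metropolis_hastings(noisy_loglik)  # chain with kernel hat{P}
print(exact_chain[-1000:].mean(), noisy_chain[-1000:].mean())
```

Because the estimate is refreshed at every iteration rather than recycled as in pseudo-marginal methods, $\hat{P}$ is generally not $\pi$-invariant; results of the kind described in the abstract quantify how far the chain driven by $\hat{P}$ can drift from the chain driven by $P$ in terms of how close the two kernels are.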


Full work available at URL: https://arxiv.org/abs/1403.5496










Cited in 48 documents.




