Scalable inference for Markov processes with intractable likelihoods

Publication:5963547

DOI: 10.1007/S11222-014-9524-7
zbMATH Open: 1331.62065
arXiv: 1403.6886
OpenAlex: W2147639966
Wikidata: Q59410638
Scholia: Q59410638
MaRDI QID: Q5963547
FDO: Q5963547


Authors: J. Owen, Darren J. Wilkinson, Colin S. Gillespie


Publication date: 22 February 2016

Published in: Statistics and Computing

Abstract: Bayesian inference for Markov processes has become increasingly relevant in recent years. Problems of this type often have intractable likelihoods, and prior knowledge about model rate parameters is often poor. Markov chain Monte Carlo (MCMC) techniques can lead to exact inference in such models, but in practice they can suffer performance issues, including long burn-in periods and poor mixing. On the other hand, approximate Bayesian computation (ABC) techniques allow rapid exploration of a large parameter space but yield only approximate posterior distributions. Here we consider the combined use of ABC and MCMC techniques for improved computational efficiency while retaining exact inference on parallel hardware.
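To illustrate the likelihood-free setting the abstract refers to, the sketch below shows a generic ABC rejection sampler for a toy pure-death Markov jump process. This is only an assumed illustration of plain ABC rejection, not the authors' combined ABC/MCMC scheme or their parallel implementation; the model, prior, summary statistic, and tolerance are all hypothetical choices.

import numpy as np

rng = np.random.default_rng(1)

def simulate_death_process(rate, x0=50, t_end=5.0):
    """Forward-simulate a pure-death Markov jump process with the
    Gillespie algorithm and return the population at time t_end.
    Hypothetical toy model, used only because it is cheap to simulate."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.exponential(1.0 / (rate * x))  # waiting time to next death
        if t > t_end:
            break
        x -= 1
    return x

# Hypothetical "observed" data, generated with an assumed true rate of 0.5.
observed = np.array([simulate_death_process(0.5) for _ in range(20)])

def abc_rejection(n_samples=200, tol=2.0):
    """Generic ABC rejection: draw a rate from the prior, simulate
    synthetic data, and accept the draw if a summary statistic (here,
    the sample mean) lies within `tol` of the observed summary."""
    accepted = []
    while len(accepted) < n_samples:
        rate = rng.uniform(0.0, 2.0)  # vague prior on the rate parameter
        synth = np.array([simulate_death_process(rate) for _ in range(20)])
        if abs(synth.mean() - observed.mean()) < tol:
            accepted.append(rate)
    return np.array(accepted)

posterior_draws = abc_rejection()
print("approximate posterior mean rate:", posterior_draws.mean())

Because the acceptance step compares summaries under a nonzero tolerance, the resulting posterior is only approximate; the paper's contribution concerns combining such ABC-style exploration with MCMC so that exact inference is retained.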


Full work available at URL: https://arxiv.org/abs/1403.6886









