Bayesian computation: a summary of the current state, and samples backwards and forwards

Publication:5963784

DOI: 10.1007/S11222-015-9574-5
zbMATH Open: 1331.62017
arXiv: 1502.01148
OpenAlex: W639587122
Wikidata: Q59409805 (Scholia: Q59409805)
MaRDI QID: Q5963784
FDO: Q5963784

Krzysztof Łatuszyński, Marcelo Pereyra, P. J. Green, Christian Robert

Publication date: 23 February 2016

Published in: Statistics and Computing

Abstract: The past decades have seen enormous improvements in computational inference based on statistical models, with continual, competing enhancements across a wide range of computational tools. In Bayesian inference, first and foremost, MCMC techniques continue to evolve, moving from random walk proposals to Langevin drift, to Hamiltonian Monte Carlo, and so on, with both theoretical and algorithmic advances opening wider access to practitioners. However, this impressive evolution in capacity is confronted by an even steeper increase in the complexity of the models and datasets to be addressed. The difficulties of modelling and then handling ever more complex datasets most likely call for a new type of tool for computational inference that dramatically reduces the dimension and size of the raw data while capturing its essential aspects. Approximate models and algorithms may thus be at the core of the next computational revolution.


Full work available at URL: https://arxiv.org/abs/1502.01148
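The abstract traces the move from random-walk proposals to Langevin drift and Hamiltonian Monte Carlo. As a minimal, hypothetical sketch (not part of the MaRDI record or of the paper itself), the Python fragment below contrasts a random-walk Metropolis kernel with the Metropolis-adjusted Langevin algorithm (MALA) on a toy standard Gaussian target; the target, step sizes, and function names are illustrative assumptions only.

```python
# Illustrative sketch: random-walk Metropolis vs. MALA on a toy 2-D Gaussian.
# All choices here (target, step sizes, seeds) are hypothetical.
import numpy as np

def log_target(x):
    """Log-density of the toy target: a standard bivariate Gaussian."""
    return -0.5 * np.dot(x, x)

def grad_log_target(x):
    """Gradient of the toy log-density, used by the Langevin proposal."""
    return -x

def rw_metropolis(x0, n_iter=5000, step=0.5, rng=None):
    """Random-walk Metropolis: symmetric Gaussian proposals around the current state."""
    rng = rng or np.random.default_rng(0)
    x, chain = np.asarray(x0, dtype=float), []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal(x.shape)
        # Symmetric proposal, so only the target ratio enters the accept step.
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x.copy())
    return np.array(chain)

def mala(x0, n_iter=5000, step=0.2, rng=None):
    """MALA: proposals drift along the gradient (Langevin dynamics), with a
    Metropolis-Hastings correction for the asymmetric proposal density."""
    rng = rng or np.random.default_rng(1)
    x, chain = np.asarray(x0, dtype=float), []

    def log_q(to, frm):
        # Log-density (up to a constant) of the Langevin proposal frm -> to.
        mean = frm + 0.5 * step**2 * grad_log_target(frm)
        return -np.sum((to - mean) ** 2) / (2 * step**2)

    for _ in range(n_iter):
        prop = x + 0.5 * step**2 * grad_log_target(x) + step * rng.standard_normal(x.shape)
        log_alpha = (log_target(prop) - log_target(x)
                     + log_q(x, prop) - log_q(prop, x))
        if np.log(rng.random()) < log_alpha:
            x = prop
        chain.append(x.copy())
    return np.array(chain)

if __name__ == "__main__":
    rw = rw_metropolis(np.zeros(2))
    la = mala(np.zeros(2))
    print("RW   sample mean:", rw.mean(axis=0))
    print("MALA sample mean:", la.mean(axis=0))
```

The gradient-informed drift is what distinguishes the Langevin proposal from the blind random walk, at the cost of requiring the gradient of the log-target; Hamiltonian Monte Carlo pushes this idea further with simulated Hamiltonian dynamics.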











