Vector operations for accelerating expensive Bayesian computations - a tutorial guide
From MaRDI portal
Publication:6202922
DOI: 10.1214/21-BA1265
arXiv: 1902.09046
OpenAlex: W3112705869
MaRDI QID: Q6202922
FDO: Q6202922
Authors: David J. Warne, S. A. Sisson, C. C. Drovandi
Publication date: 27 February 2024
Published in: Bayesian Analysis
Abstract: Many applications in Bayesian statistics are extremely computationally intensive. However, they are often inherently parallel, making them prime targets for modern massively parallel processors. Multi-core and distributed computing is widely applied in the Bayesian community; however, very little attention has been given to fine-grain parallelisation using single instruction multiple data (SIMD) operations, which are available on most modern commodity CPUs and are the basis of GPGPU computing. In this work, we practically demonstrate, using standard programming libraries, the utility of the SIMD approach for several topical Bayesian applications. We show that SIMD can improve floating point arithmetic performance, resulting in substantial improvement in serial algorithm performance. Importantly, these improvements are multiplicative to any gains achieved through multi-core processing. We illustrate the potential of SIMD for accelerating Bayesian computations and provide the reader with techniques for exploiting modern massively parallel processing environments using standard tools.
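As a hedged illustration of the vectorisation idea the abstract describes (not code from the paper itself): batch array operations, such as the distance computations in an approximate Bayesian computation rejection step, can be expressed as single array expressions that a vectorised library maps onto SIMD lanes, rather than as an explicit per-sample loop. The sketch below uses NumPy and hypothetical function names to contrast the two styles.

```python
import numpy as np

# Hypothetical ABC-style step: compute Euclidean distances between each
# simulated summary-statistic vector and the observed summary vector.

def distances_loop(sims, obs):
    # Scalar-style loop: one sample at a time.
    out = np.empty(len(sims))
    for i, s in enumerate(sims):
        out[i] = np.sqrt(np.sum((s - obs) ** 2))
    return out

def distances_vectorised(sims, obs):
    # One fused array expression over the whole batch; the library's
    # compiled kernels can apply SIMD instructions across array elements.
    return np.sqrt(np.sum((sims - obs) ** 2, axis=1))

rng = np.random.default_rng(0)
sims = rng.normal(size=(1000, 4))   # 1000 simulated summary vectors
obs = rng.normal(size=4)            # observed summary vector

d1 = distances_loop(sims, obs)
d2 = distances_vectorised(sims, obs)
assert np.allclose(d1, d2)          # identical results, different cost
```

The two functions compute the same quantities; the gain the paper studies comes from replacing interpreted per-element work with hardware-level data parallelism, and (as the abstract notes) such gains compound with any multi-core parallelism applied on top.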
Full work available at URL: https://arxiv.org/abs/1902.09046
Keywords: sequential Monte Carlo; approximate Bayesian computation; single instruction multiple data; vectorisation; advanced vector extensions; weakly informative priors
Cites Work
- Julia: a fresh approach to numerical computing
- Generalized autoregressive conditional heteroscedasticity
- Sequential Monte Carlo Samplers
- Massive Parallelization Boosts Big Bayesian Multidimensional Scaling
- Title not available
- A comparative review of dimension reduction methods in approximate Bayesian computation
- Particle Markov Chain Monte Carlo Methods
- Sequential Monte Carlo without likelihoods
- Prior distributions for variance parameters in hierarchical models (Comment on article by Browne and Draper)
- A sequential particle filter method for static models
- On the convergence of adaptive sequential Monte Carlo methods
- Continuous Markov processes and stochastic equations
- Weak informativity and the information in one prior relative to another
- SIMD parallel MCMC sampling with applications for big-data Bayesian analytics
- Approximation of Bayesian predictive \(p\)-values with regression ABC
- The rate of convergence for approximate Bayesian computation
- Multilevel rejection sampling for approximate Bayesian computation
- Recalibration: a post-processing method for approximate Bayesian computation
- Bayesian computation: a summary of the current state, and samples backwards and forwards
- Bayesian parametric bootstrap for models with intractable likelihoods
- GPU-accelerated Gibbs sampling: a case study of the horseshoe probit model
- Sequential Monte Carlo samplers with independent Markov chain Monte Carlo proposals
- Multilevel Monte Carlo in approximate Bayesian computation
- Bad environments, good environments: a non-Gaussian asymmetric volatility model
Cited In (1)
This page was built for publication: Vector operations for accelerating expensive Bayesian computations - a tutorial guide