Asynchronous and Distributed Data Augmentation for Massive Data Settings
Publication:6180720
Abstract: Data augmentation (DA) algorithms are widely used for Bayesian inference due to their simplicity. In massive data settings, however, DA algorithms are prohibitively slow because they pass through the full data in every iteration, imposing serious restrictions on their usage despite their advantages. Addressing this problem, we develop a framework that exploits asynchronous and distributed computing to extend any DA algorithm. The extended DA algorithm is indexed by a parameter r ∈ (0, 1] and is called Asynchronous and Distributed (AD) DA, with the original DA as its parent. Any ADDA starts by dividing the full data into k smaller disjoint subsets and storing them on k processes, which could be machines or processors. Every iteration of ADDA augments only an r-fraction of the k data subsets with some positive probability and leaves the remaining (1 − r)-fraction of the augmented data unchanged. The parameter draws are obtained using the r-fraction of new and the (1 − r)-fraction of old augmented data. For many choices of k and r, the fractional updates of ADDA lead to a significant speed-up over the parent DA in massive data settings, and ADDA reduces to the distributed version of its parent DA when r = 1. We show that the ADDA Markov chain is Harris ergodic with the desired stationary distribution under mild conditions on the parent DA algorithm. We demonstrate the numerical advantages of ADDA in three representative examples corresponding to different kinds of massive data settings encountered in applications. In all these examples, our DA generalization is significantly faster than its parent DA algorithm for all choices of k and r. We also establish geometric ergodicity of the ADDA Markov chain in all three examples, which in turn yields asymptotically valid standard errors for estimates of the desired posterior quantities.
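To make the update scheme concrete, here is a minimal sketch of one way an ADDA iteration can be organized. The parent DA is a Student-t location model fit via its normal scale-mixture augmentation, which is an illustrative assumption and not one of the paper's three examples; the asynchronous refresh is likewise approximated by redrawing the latent variables on a uniformly chosen r-fraction of the k shards in each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parent DA (not from the paper): Student-t location model via its
# normal scale-mixture representation,
#   y_i | theta, lam_i ~ N(theta, 1/lam_i),  lam_i ~ Gamma(nu/2, rate=nu/2).
# I-step: lam_i | theta, y_i ~ Gamma((nu+1)/2, rate=(nu + (y_i-theta)^2)/2)
# P-step (flat prior): theta | lam, y ~ N(sum(lam*y)/sum(lam), 1/sum(lam))
nu = 5.0
n, k = 10_000, 20                        # full data size and number of shards
r = 0.25                                 # fraction of shards refreshed per iteration
y = rng.standard_t(nu, size=n) + 2.0     # synthetic data with true location 2.0

shards = np.array_split(y, k)            # disjoint subsets, one per "worker"
lam = [np.ones_like(s) for s in shards]  # augmented data, initialized to 1

theta = 0.0
n_refresh = max(1, int(np.ceil(r * k)))
draws = []
for it in range(5_000):
    # ADDA I-step: only an r-fraction of the k shards refresh their latent
    # variables given the current theta; the rest keep stale augmentations.
    # (A sequential stand-in for the asynchronous worker updates.)
    for j in rng.choice(k, size=n_refresh, replace=False):
        resid2 = (shards[j] - theta) ** 2
        lam[j] = rng.gamma((nu + 1.0) / 2.0, 2.0 / (nu + resid2))
    # ADDA P-step: theta is drawn from its conditional given the mix of new
    # and old augmented data; each worker only needs to communicate its
    # sufficient statistics (sum(lam*y), sum(lam)).
    s1 = sum((l * s).sum() for l, s in zip(lam, shards))
    s0 = sum(l.sum() for l in lam)
    theta = rng.normal(s1 / s0, np.sqrt(1.0 / s0))
    draws.append(theta)

print("posterior mean of theta:", np.mean(draws[1000:]))
```

In a genuine ADDA run, each shard's I-step would execute on its own worker process and the central process would use whichever refreshed sufficient statistics had arrived by draw time; the sequential loop above only mimics the resulting fractional updates.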
Cites work
- A hybrid scan Gibbs sampler for Bayesian models with latent variables
- An Asynchronous Distributed Expectation Maximization Algorithm for Massive Data: The DEM Algorithm
- Analysis of the Pólya-gamma block Gibbs sampler for Bayesian logistic linear mixed models
- Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables
- Communication-efficient distributed statistical inference
- Distributed Bayesian Inference in Linear Mixed-Effects Models
- Distributed algorithms for topic models
- Divide-and-conquer Bayesian inference in hidden Markov models
- Double-parallel Monte Carlo for Bayesian analysis of big data
- Fast Monte Carlo Markov chains for Bayesian shrinkage models with random effects
- Fast moment-based estimation for hierarchical models
- Fixed-Width Output Analysis for Markov Chain Monte Carlo
- Geometric ergodicity of Gibbs samplers for Bayesian general linear mixed models with proper priors
- Geometric ergodicity of Pólya-Gamma Gibbs sampler for Bayesian logistic regression with a flat prior
- Geometric ergodicity of the Bayesian Lasso
- Honest exploration of intractable probability distributions via Markov chain Monte Carlo
- Implementing random scan Gibbs samplers
- MCMC for imbalanced categorical data
- Multivariate output analysis for Markov chain Monte Carlo
- Online expectation maximization based algorithms for inference in hidden Markov models
- Scalable Bayes via barycenter in Wasserstein space
- Simple, scalable and accurate posterior interval estimation
- Stochastic gradient Markov chain Monte Carlo
- The Pólya-gamma Gibbs sampler for Bayesian logistic regression is uniformly ergodic
- Uncertainty Quantification for Modern High-Dimensional Regression via Scalable Bayesian Methods