Efficient data augmentation techniques for some classes of state space models

From MaRDI portal
Publication:6111471

DOI: 10.1214/22-STS867
arXiv: 1712.08887
OpenAlex: W4312238537
MaRDI QID: Q6111471
FDO: Q6111471


Author: Linda S. L. Tan


Publication date: 7 July 2023

Published in: Statistical Science

Abstract: Data augmentation improves the convergence of iterative algorithms, such as the EM algorithm and the Gibbs sampler, by introducing carefully designed latent variables. In this article, we first propose a data augmentation scheme for the first-order autoregression plus noise model, in which optimal values of the working parameters introduced for recentering and rescaling the latent states can be derived analytically by minimizing the fraction of missing information in the EM algorithm. The proposed data augmentation scheme is then used to design efficient Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference in some non-Gaussian and nonlinear state space models, via a mixture-of-normals approximation coupled with a block-specific reparametrization strategy. Applications to simulated and benchmark real datasets indicate that the proposed MCMC sampler can yield improvements in simulation efficiency over centering, noncentering, and even the ancillarity-sufficiency interweaving strategy.
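The recentering/rescaling idea in the abstract can be illustrated with a minimal sketch. This is an assumed toy setup, not the paper's code: it simulates a first-order autoregression plus noise model and applies a generic partially noncentered transform with working parameters (a, b), where a shifts the states by a fraction of the mean and b rescales by a power of the state standard deviation. In the paper the optimal (a, b) minimize the fraction of missing information in the EM algorithm; here they are simply fixed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) plus noise model (illustrative parameter values, not from the paper):
#   x_t = mu + phi * (x_{t-1} - mu) + sigma * eta_t,  eta_t ~ N(0, 1)
#   y_t = x_t + tau * e_t,                            e_t  ~ N(0, 1)
mu, phi, sigma, tau, T = 2.0, 0.9, 0.5, 0.3, 200

x = np.empty(T)
x[0] = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal()  # stationary start
for t in range(1, T):
    x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.standard_normal()
y = x + tau * rng.standard_normal(T)

# Partially noncentered reparametrization with working parameters (a, b):
# a = 0 leaves the location centered, a = 1 removes the mean entirely
# (noncentered); b rescales the states by sigma**b.
def to_working(x, mu, sigma, a, b):
    """Map latent states to the recentered/rescaled working parametrization."""
    return (x - a * mu) / sigma**b

def from_working(x_tilde, mu, sigma, a, b):
    """Invert the working-parameter transform back to the original states."""
    return sigma**b * x_tilde + a * mu

a, b = 0.5, 1.0  # fixed here; chosen optimally in the paper
x_tilde = to_working(x, mu, sigma, a, b)
assert np.allclose(from_working(x_tilde, mu, sigma, a, b), x)
```

An EM or MCMC sampler would then update the model parameters conditional on the working states x_tilde, with the choice of (a, b) governing how strongly the states and parameters are coupled a posteriori.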


Full work available at URL: https://arxiv.org/abs/1712.08887








