Efficient data augmentation techniques for some classes of state space models
From MaRDI portal
Publication: 6111471
DOI: 10.1214/22-STS867 · arXiv: 1712.08887 · OpenAlex: W4312238537 · MaRDI QID: Q6111471 · FDO: Q6111471
Authors: Linda S. L. Tan
Publication date: 7 July 2023
Published in: Statistical Science
Abstract: Data augmentation improves the convergence of iterative algorithms, such as the EM algorithm and the Gibbs sampler, by introducing carefully designed latent variables. In this article, we first propose a data augmentation scheme for the first-order autoregression plus noise model, in which optimal values of the working parameters introduced for recentering and rescaling the latent states can be derived analytically by minimizing the fraction of missing information in the EM algorithm. The proposed data augmentation scheme is then used to design efficient Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference in some non-Gaussian and nonlinear state space models, via a mixture-of-normals approximation coupled with a block-specific reparametrization strategy. Applications to simulated and benchmark real datasets indicate that the proposed MCMC sampler can yield improvements in simulation efficiency compared with centering, noncentering, and even the ancillarity-sufficiency interweaving strategy.
Full work available at URL: https://arxiv.org/abs/1712.08887
Keywords: Markov chain Monte Carlo; data augmentation; EM algorithm; state space model; ancillarity-sufficiency interweaving strategy; reparametrization; stochastic volatility model
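The abstract's core idea, a working parameter that interpolates between centered and noncentered parametrizations of the latent states, with its optimal value available in closed form, can be illustrated on a toy model. The sketch below is not the paper's AR(1)-plus-noise scheme; it is a minimal partially noncentered Gibbs sampler for the normal hierarchical model y_i ~ N(x_i, s2y), x_i ~ N(mu, s2x) with a flat prior on mu, where the analogous optimal working parameter w = s2y/(s2x + s2y) is a standard result. All variable names are illustrative.

```python
# Hedged sketch: partially noncentred parametrisation (PNCP) with working
# parameter w for the toy model  y_i ~ N(x_i, s2y),  x_i ~ N(mu, s2x),
# flat prior on mu.  Reparametrise x_tilde_i = x_i - w*mu:
#   w = 0 is the centred, w = 1 the noncentred parametrisation, and
#   w = s2y/(s2x + s2y) removes the dependence of x_tilde on mu,
#   making the Gibbs chain for mu essentially uncorrelated.
import numpy as np

rng = np.random.default_rng(0)
n, s2x, s2y = 50, 1.0, 10.0          # s2y >> s2x: the centred sampler mixes poorly
y = rng.normal(2.0, np.sqrt(s2x + s2y), size=n)   # marginal distribution of y_i

def gibbs_mu_chain(w, iters=20000):
    """Gibbs sampler on (x_tilde, mu), where x_tilde_i = x_i - w*mu."""
    mu = 0.0
    prec = 1.0 / s2x + 1.0 / s2y     # conditional precision of each x_tilde_i
    chain = np.empty(iters)
    for t in range(iters):
        # x_tilde_i | mu, y_i: prior N((1-w)*mu, s2x), likelihood y_i ~ N(x_tilde_i + w*mu, s2y)
        m = ((1 - w) * mu / s2x + (y - w * mu) / s2y) / prec
        xt = m + rng.normal(size=n) / np.sqrt(prec)
        # mu | x_tilde, y under a flat prior
        pmu = n * (w**2 / s2y + (1 - w) ** 2 / s2x)
        mmu = (w * (y - xt) / s2y + (1 - w) * xt / s2x).sum() / pmu
        mu = mmu + rng.normal() / np.sqrt(pmu)
        chain[t] = mu
    return chain

def lag1_autocorr(c):
    c = c - c.mean()
    return (c[:-1] * c[1:]).mean() / (c * c).mean()

rho_c = lag1_autocorr(gibbs_mu_chain(w=0.0))                  # centred
rho_p = lag1_autocorr(gibbs_mu_chain(w=s2y / (s2x + s2y)))    # optimal working parameter
print(f"lag-1 autocorrelation of mu: centred {rho_c:.3f}, optimal PNCP {rho_p:.3f}")
```

With s2y = 10 and s2x = 1, the centered chain behaves like an AR(1) process with coefficient near s2y/(s2x + s2y) ≈ 0.91, while the optimally weighted chain is close to i.i.d. The paper derives the analogous optimal working parameters for the AR(1)-plus-noise model by minimizing the fraction of missing information in EM.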
Cites Work
- MCMC using Hamiltonian dynamics
- Julia: a fresh approach to numerical computing
- Title not available
- Ancillarity-sufficiency interweaving strategy (ASIS) for boosting MCMC estimation of stochastic volatility models
- Achieving shrinkage in a time-varying parameter model framework
- The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
- Sequential Monte Carlo Methods in Practice
- Title not available
- On the convergence properties of the EM algorithm
- Stochastic model specification search for Gaussian and partial non-Gaussian state space models
- Maximum likelihood estimation via the ECM algorithm: A general framework
- Title not available
- Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models
- Stochastic volatility with leverage: fast and efficient likelihood inference
- Efficient parametrisations for normal linear mixed models
- An approach to time series smoothing and forecasting using the EM algorithm
- Auxiliary mixture sampling with applications to logistic models
- Explaining variational approximations
- Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
- A general framework for the parametrization of hierarchical models
- Filtering via Simulation: Auxiliary Particle Filters
- Particle Markov Chain Monte Carlo Methods
- Title not available
- Title not available
- Damped Anderson Acceleration With Restarts and Monotonicity Control for Accelerating EM and EM-like Algorithms
- Parameter expansion to accelerate EM: the PX-EM algorithm
- Multivariate Stochastic Variance Models
- Title not available
- Title not available
- Autoregressive Conditional Duration: A New Model for Irregularly Spaced Transaction Data
- A quasi-Newton acceleration for high-dimensional optimization algorithms
- Efficient Bayesian inference for stochastic time-varying copula models
- Riemann manifold Langevin and Hamiltonian Monte Carlo methods. With discussion and authors' reply
- Variational inference for generalized linear mixed models using partially noncentered parametrizations
- On particle methods for parameter estimation in state-space models
- Augmentation schemes for particle MCMC
- Particle learning and smoothing
- The stochastic conditional duration model: a latent variable model for the analysis of financial durations
- Fast EM-type Implementations for Mixed Effects Models
- State space mixed models for binary responses with scale mixture of normal distributions links
- Statistical modeling and computation
- A comparison of centring parameterisations of Gaussian process-based models for Bayesian computation using MCMC
- Analysis of financial time series
- Title not available
- Bayesian analysis of the stochastic conditional duration model
- Title not available
- Time series analysis and its applications. With R examples
- Linearly preconditioned nonlinear conjugate gradient acceleration of the PX-EM algorithm
- Acceleration of the EM algorithm via extrapolation methods: review, comparison and new methods
- Sequential Monte Carlo smoothing with parameter estimation
- Explicit inverse of tridiagonal matrix with applications in autoregressive modelling
- Efficient Bayesian Inference for Nonlinear State Space Models With Univariate Autoregressive State Equation
- Use of Model Reparametrization to Improve Variational Bayes
- Dynamically Rescaled Hamiltonian Monte Carlo for Bayesian Hierarchical Models
- Linear state-space models for blind source separation
- Data transforming augmentation for heteroscedastic models
- Importance Sampling-Based Transport Map Hamiltonian Monte Carlo for Bayesian Hierarchical Models
- Multilevel linear models, Gibbs samplers and multigrid decompositions (with discussion)
- Fast and accurate estimation of non-nested binomial hierarchical models using variational inference
Cited In (1)