Efficient data augmentation techniques for some classes of state space models
From MaRDI portal
Publication:6111471
Abstract: Data augmentation improves the convergence of iterative algorithms, such as the EM algorithm and the Gibbs sampler, by introducing carefully designed latent variables. In this article, we first propose a data augmentation scheme for the first-order autoregression plus noise model, where optimal values of the working parameters introduced for recentering and rescaling the latent states can be derived analytically by minimizing the fraction of missing information in the EM algorithm. The proposed data augmentation scheme is then used to design efficient Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference in some non-Gaussian and nonlinear state space models, via a mixture-of-normals approximation coupled with a block-specific reparametrization strategy. Applications to simulated and benchmark real datasets indicate that the proposed MCMC sampler can yield improvements in simulation efficiency over centering, noncentering, and even the ancillarity-sufficiency interweaving strategy.
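The recentering/rescaling idea in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it simulates a first-order autoregression plus noise model and applies a partially noncentered transformation of the latent states with two working parameters `a` (recentering) and `b` (rescaling), whose names and form are illustrative assumptions; the paper's contribution, deriving their optimal values by minimizing the fraction of missing information in the EM algorithm, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1)-plus-noise model:
#   x_t = mu + phi * (x_{t-1} - mu) + sigma * eta_t,  eta_t ~ N(0, 1)
#   y_t = x_t + tau * eps_t,                          eps_t ~ N(0, 1)
T, mu, phi, sigma, tau = 500, 1.0, 0.9, 0.5, 0.3

x = np.empty(T)
x[0] = mu + sigma / np.sqrt(1 - phi**2) * rng.standard_normal()  # stationary start
for t in range(1, T):
    x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.standard_normal()
y = x + tau * rng.standard_normal(T)  # noisy observations of the latent states

def reparametrize(x, mu, sigma, a, b):
    """Partially recentred/rescaled latent states (illustrative form):
    a = b = 0 leaves the states untouched (centred parametrization);
    a = b = 1 removes the level and the scale (noncentred parametrization).
    Intermediate (a, b) interpolate between the two."""
    return (x - a * mu) / sigma**b

x_centred = reparametrize(x, mu, sigma, 0.0, 0.0)     # identical to x
x_noncentred = reparametrize(x, mu, sigma, 1.0, 1.0)  # level- and scale-free
```

In a Gibbs sampler or EM algorithm, the choice of `(a, b)` changes the posterior dependence between the transformed states and the static parameters `(mu, sigma)`, which is what drives the efficiency gains the abstract reports.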
Cites work
- scientific article; zbMATH DE number 5280144 (title not available)
- scientific article; zbMATH DE number 3567782 (title not available)
- scientific article; zbMATH DE number 1085989 (title not available)
- scientific article; zbMATH DE number 1086057 (title not available)
- scientific article; zbMATH DE number 1086058 (title not available)
- scientific article; zbMATH DE number 1086083 (title not available)
- scientific article; zbMATH DE number 1522717 (title not available)
- scientific article; zbMATH DE number 5223072 (title not available)
- scientific article; zbMATH DE number 5263160 (title not available)
- A comparison of centring parameterisations of Gaussian process-based models for Bayesian computation using MCMC
- A general framework for the parametrization of hierarchical models
- A quasi-Newton acceleration for high-dimensional optimization algorithms
- An approach to time series smoothing and forecasting using the EM algorithm
- Acceleration of the EM algorithm via extrapolation methods: review, comparison and new methods
- Achieving shrinkage in a time-varying parameter model framework
- Analysis of financial time series
- Ancillarity-sufficiency interweaving strategy (ASIS) for boosting MCMC estimation of stochastic volatility models
- Augmentation schemes for particle MCMC
- Autoregressive Conditional Duration: A New Model for Irregularly Spaced Transaction Data
- Auxiliary mixture sampling with applications to logistic models
- Bayesian analysis of the stochastic conditional duration model
- Damped Anderson Acceleration With Restarts and Monotonicity Control for Accelerating EM and EM-like Algorithms
- Data transforming augmentation for heteroscedastic models
- Dynamically Rescaled Hamiltonian Monte Carlo for Bayesian Hierarchical Models
- Efficient Bayesian Inference for Nonlinear State Space Models With Univariate Autoregressive State Equation
- Efficient Bayesian inference for stochastic time-varying copula models
- Efficient parametrisations for normal linear mixed models
- Explaining variational approximations
- Explicit inverse of tridiagonal matrix with applications in autoregressive modelling
- Fast EM-type Implementations for Mixed Effects Models
- Fast and accurate estimation of non-nested binomial hierarchical models using variational inference
- Filtering via Simulation: Auxiliary Particle Filters
- Importance Sampling-Based Transport Map Hamiltonian Monte Carlo for Bayesian Hierarchical Models
- Julia: a fresh approach to numerical computing
- Linear state-space models for blind source separation
- Linearly preconditioned nonlinear conjugate gradient acceleration of the PX-EM algorithm
- MCMC using Hamiltonian dynamics
- Maximum likelihood estimation via the ECM algorithm: A general framework
- Multilevel linear models, Gibbs samplers and multigrid decompositions (with discussion)
- Multivariate Stochastic Variance Models
- On particle methods for parameter estimation in state-space models
- On the convergence properties of the EM algorithm
- Parameter expansion to accelerate EM: the PX-EM algorithm
- Particle Markov Chain Monte Carlo Methods
- Particle learning and smoothing
- Riemann manifold Langevin and Hamiltonian Monte Carlo methods. With discussion and authors' reply
- Sequential Monte Carlo Methods in Practice
- Sequential Monte Carlo smoothing with parameter estimation
- State space mixed models for binary responses with scale mixture of normal distributions links
- Statistical modeling and computation
- Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
- Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models
- Stochastic model specification search for Gaussian and partial non-Gaussian state space models
- Stochastic volatility with leverage: fast and efficient likelihood inference
- The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo
- The stochastic conditional duration model: a latent variable model for the analysis of financial durations
- Time series analysis and its applications. With R examples
- Use of Model Reparametrization to Improve Variational Bayes
- Variational inference for generalized linear mixed models using partially noncentered parametrizations