Efficient inference in state-space models through adaptive learning in online Monte Carlo expectation maximization
From MaRDI portal
Publication:2203422
DOI: 10.1007/s00180-019-00937-4
zbMath: 1505.62185
arXiv: 1807.03265
OpenAlex: W2992026747
Wikidata: Q98238517 (Scholia: Q98238517)
MaRDI QID: Q2203422
Donna Henderson, Gerton Lunter
Publication date: 6 October 2020
Published in: Computational Statistics
Full work available at URL: https://arxiv.org/abs/1807.03265
Keywords: sequential Monte Carlo; online estimation; stochastic approximation expectation maximization; latent variable model
Computational methods for problems pertaining to statistics (62-08); Monte Carlo methods (65C05); Stochastic approximation (62L20)
Uses Software
Cites Work
- An approach to time series smoothing and forecasting using the EM algorithm
- Lookahead strategies for sequential Monte Carlo
- Sequential Monte Carlo smoothing with application to parameter estimation in nonlinear state space models
- Convergence of a stochastic approximation version of the EM algorithm
- Online expectation maximization based algorithms for inference in hidden Markov models
- Sequential Monte Carlo Methods in Practice
- On-Line Expectation–Maximization Algorithm for Latent Data Models
- Particle filters and Bayesian inference in financial econometrics
- Online Learning with Hidden Markov Models
- Simple and Globally Convergent Methods for Accelerating the Convergence of Any EM Algorithm
- A stochastic approximation type EM algorithm for the mixture problem
- Filtering via Simulation: Auxiliary Particle Filters
- Conjugate Gradient Acceleration of the EM Algorithm
- Bayesian Analysis of Isochores