Convergence of a Particle-Based Approximation of the Block Online Expectation Maximization Algorithm

Publication:4635204

DOI: 10.1145/2414416.2414418
zbMATH Open: 1384.62066
arXiv: 1111.1307
OpenAlex: W2033281746
MaRDI QID: Q4635204
FDO: Q4635204

Gersende Fort, Sylvain Le Corff

Publication date: 16 April 2018

Published in: ACM Transactions on Modeling and Computer Simulation

Abstract: Online variants of the Expectation Maximization (EM) algorithm have recently been proposed to perform parameter inference with large data sets or data streams, in independent latent models and in hidden Markov models. Nevertheless, the convergence properties of these algorithms remain an open problem, at least in the hidden Markov case. This contribution deals with a new online EM algorithm that updates the parameter at deterministic times. Convergence results have been derived for it even in general latent models such as hidden Markov models. These results rely on the assumption that some intermediate quantities are available in closed form or can be approximated by Monte Carlo methods whose error vanishes rapidly enough. In this paper, we propose an algorithm which approximates these quantities using Sequential Monte Carlo methods. The convergence of this algorithm and of an averaged version is established, and their performance is illustrated through Monte Carlo experiments.
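To make the setting concrete, the following is a minimal sketch of a block online EM recursion with a particle (Sequential Monte Carlo) approximation, written for a toy linear-Gaussian AR(1) state-space model with a single unknown autoregressive coefficient. The bootstrap filter with genealogy tracing used as the smoother, the block schedule, and all function and parameter names are illustrative assumptions for this sketch, not the exact algorithm or model analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def simulate(T, phi, sigma_u=0.5, sigma_v=0.5):
    """Toy model (assumed for illustration): X_t = phi*X_{t-1} + sigma_u*U_t,
    Y_t = X_t + sigma_v*V_t, with standard Gaussian noise."""
    x = np.zeros(T)
    x[0] = sigma_u * rng.normal()
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma_u * rng.normal()
    return x + sigma_v * rng.normal(size=T)


def block_online_em_smc(y, block_size=200, n_particles=500,
                        phi0=0.1, sigma_u=0.5, sigma_v=0.5):
    """Sketch of block online EM: the parameter phi is updated only at the end
    of each block (the deterministic update times), using a bootstrap particle
    filter whose surviving ancestral paths approximate the smoothed
    sufficient statistics over the block."""
    phi = phi0
    estimates = []
    particles = sigma_u * rng.normal(size=n_particles)  # filter state carried across blocks
    for start in range(0, len(y), block_size):
        end = min(start + block_size, len(y))
        paths = particles[:, None]  # trajectories stored within the current block only
        for t in range(start, end):
            # Propagate, weight by the observation likelihood, and resample.
            prop = phi * paths[:, -1] + sigma_u * rng.normal(size=n_particles)
            logw = -0.5 * ((y[t] - prop) / sigma_v) ** 2
            w = np.exp(logw - logw.max())
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)
            paths = np.column_stack([paths[idx], prop[idx]])
        particles = paths[:, -1]
        # Particle approximation of the block sufficient statistics
        # S1 ~ E[sum_t X_t X_{t-1} | Y], S2 ~ E[sum_t X_{t-1}^2 | Y].
        s1 = np.mean(np.sum(paths[:, 1:] * paths[:, :-1], axis=1))
        s2 = np.mean(np.sum(paths[:, :-1] ** 2, axis=1))
        phi = s1 / s2  # M-step at the block boundary
        estimates.append(phi)
    return estimates


if __name__ == "__main__":
    y = simulate(T=2000, phi=0.7)
    print(block_online_em_smc(y))
```

The paper also analyzes an averaged version of the estimator; in a sketch like this one, that would amount to reporting a running average of the per-block estimates rather than the latest one.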


Full work available at URL: https://arxiv.org/abs/1111.1307






Cited In (5)






