Analysis of a generalised expectation-maximisation algorithm for Gaussian mixture models: a control systems perspective

Publication: 5043530

DOI: 10.1080/00207179.2021.1931964
zbMATH Open: 1500.93051
arXiv: 1903.00979
OpenAlex: W3163480284
MaRDI QID: Q5043530
FDO: Q5043530

Sarthak Chatterjee, Sérgio Pequito, Orlando Romero

Publication date: 6 October 2022

Published in: International Journal of Control

Abstract: The Expectation-Maximization (EM) algorithm is one of the most popular methods used to solve the problem of parametric distribution-based clustering in unsupervised learning. In this paper, we propose to analyze a generalized EM (GEM) algorithm in the context of Gaussian mixture models, where the maximization step of EM is replaced by a step that merely increases the objective. We show that this GEM algorithm can be understood as a linear time-invariant (LTI) system with a feedback nonlinearity. Therefore, we explore some of its convergence properties by leveraging tools from robust control theory. Lastly, we explain how the proposed GEM can be designed, and present a pedagogical example to illustrate the advantages of the proposed approach.
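
As a rough illustration of the idea summarised in the abstract, the following Python sketch runs a GEM-style iteration on a one-dimensional, two-component Gaussian mixture, replacing the closed-form M-step for the means with a single gradient ascent step that only increases the objective. The step size, the fixed unit variances and weights, and the function names (e_step, gem_step) are illustrative assumptions, not the paper's construction, which views the iteration as an LTI system with a feedback nonlinearity.

```python
# Minimal GEM-style sketch for a 1-D, two-component Gaussian mixture.
# Assumption: variances and mixture weights are held fixed; only the means
# are updated, and the M-step is replaced by one gradient ascent step.
import numpy as np

def e_step(x, means, var, weights):
    # Posterior responsibility of each component for each sample, shape (n, 2).
    dens = weights * np.exp(-0.5 * (x[:, None] - means) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return dens / dens.sum(axis=1, keepdims=True)

def gem_step(x, means, var, weights, step_size=1e-3):
    # One GEM iteration: exact E-step, then a gradient ascent step on the
    # expected complete-data log-likelihood instead of the closed-form M-step.
    resp = e_step(x, means, var, weights)
    grad = (resp * (x[:, None] - means)).sum(axis=0) / var  # gradient w.r.t. the means
    return means + step_size * grad  # increases the objective; does not maximise it

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
means = np.array([-1.0, 1.0])             # initial guesses for the component means
var, weights = 1.0, np.array([0.5, 0.5])  # variances and weights held fixed here
for _ in range(200):
    means = gem_step(x, means, var, weights)
print("estimated means:", means)  # expected to end up near [-2, 3]
```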


Full work available at URL: https://arxiv.org/abs/1903.00979
