Learning mixtures of Bernoulli templates by two-round EM with performance guarantee


DOI: 10.1214/14-EJS981
zbMATH Open: 1303.62037
arXiv: 1305.0319
MaRDI QID: Q489161


Authors: Adrian Barbu, Tianfu Wu, Ying Nian Wu


Publication date: 27 January 2015

Published in: Electronic Journal of Statistics

Abstract: Dasgupta and Schulman showed that a two-round variant of the EM algorithm can learn a mixture of Gaussian distributions with near-optimal precision with high probability if the Gaussian distributions are well separated and if the dimension is sufficiently high. In this paper, we generalize their theory to learning mixtures of high-dimensional Bernoulli templates. Each template is a binary vector, and a template generates examples by randomly switching its binary components independently with a certain probability. In computer vision applications, a binary vector is a feature map of an image, where each binary component indicates whether a local feature or structure is present or absent within a certain cell of the image domain. A Bernoulli template can be considered a statistical model for images of objects (or parts of objects) from the same category. We show that the two-round EM algorithm can learn a mixture of Bernoulli templates with near-optimal precision with high probability if the Bernoulli templates are sufficiently different and if the number of features is sufficiently high. We illustrate the theoretical results with synthetic and real examples.
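
The abstract describes both the generative model (binary templates whose components are flipped independently with a fixed probability) and a two-round EM scheme in the spirit of Dasgupta and Schulman: overseed with extra centers, run one EM step, prune to k well-separated centers, run one more EM step, then round to binary. The Python sketch below is a minimal illustration under simplifying assumptions, not the paper's exact procedure; the overseeding factor 4k, the greedy L1 pruning threshold n/8, the uniform mixing weights, and all function names are hypothetical choices made for the example.

```python
# Minimal sketch of a mixture of Bernoulli templates and a two-round EM,
# in the spirit of the Dasgupta-Schulman scheme. Illustrative only:
# constants (4*k seeds, n/8 pruning radius) are not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def sample_data(templates, q, m):
    """Draw m examples: pick a template uniformly, flip each bit with prob. q."""
    k, n = templates.shape
    labels = rng.integers(k, size=m)
    flips = rng.random((m, n)) < q
    return np.logical_xor(templates[labels], flips).astype(float), labels

def em_step(X, centers):
    """One EM step for a mixture of independent Bernoullis (uniform weights)."""
    eps = 1e-9
    P = np.clip(centers, eps, 1 - eps)                       # (c, n) parameters
    log_lik = X @ np.log(P).T + (1 - X) @ np.log(1 - P).T    # (m, c)
    log_lik -= log_lik.max(axis=1, keepdims=True)            # stabilize exp
    R = np.exp(log_lik)
    R /= R.sum(axis=1, keepdims=True)                        # responsibilities
    return (R.T @ X) / (R.sum(axis=0)[:, None] + eps)        # updated centers

def two_round_em(X, k, n_init=None):
    m, n = X.shape
    n_init = n_init or 4 * k                     # overseed with extra centers
    centers = X[rng.choice(m, n_init, replace=False)]
    centers = em_step(X, centers)                # round 1
    kept = [0]                                   # greedy pruning: keep centers
    for i in range(1, n_init):                   # that are far apart in L1
        if all(np.abs(centers[i] - centers[j]).sum() > n / 8 for j in kept):
            kept.append(i)
        if len(kept) == k:
            break
    centers = em_step(X, centers[kept])          # round 2
    return (centers > 0.5).astype(int)           # round to binary templates

# Demo: 3 templates in 400 dimensions, 5% flip noise.
true_templates = rng.integers(2, size=(3, 400))
X, _ = sample_data(true_templates, q=0.05, m=600)
est = two_round_em(X, k=3)
```

The high-dimensional setting is what makes the pruning step reliable here: distinct 400-dimensional templates differ in roughly half their bits, so centers attracted to different templates are far apart in L1 distance, while duplicate centers collapse together after the first EM step.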


Full work available at URL: https://arxiv.org/abs/1305.0319





Cited In (2)





