Learning mixtures of Bernoulli templates by two-round EM with performance guarantee
DOI: 10.1214/14-EJS981 · zbMATH Open: 1303.62037 · MaRDI QID: Q489161
Authors: Adrian Barbu, Tianfu Wu, Ying Nian Wu
Publication date: 27 January 2015
Published in: Electronic Journal of Statistics
Abstract: Dasgupta and Schulman showed that a two-round variant of the EM algorithm can learn a mixture of Gaussian distributions with near-optimal precision with high probability if the Gaussian distributions are well separated and if the dimension is sufficiently high. In this paper, we generalize their theory to learning mixtures of high-dimensional Bernoulli templates. Each template is a binary vector, and a template generates examples by randomly switching its binary components independently with a certain probability. In computer vision applications, a binary vector is a feature map of an image, where each binary component indicates whether a local feature or structure is present or absent within a certain cell of the image domain. A Bernoulli template can be considered a statistical model for images of objects (or parts of objects) from the same category. We show that the two-round EM algorithm can learn a mixture of Bernoulli templates with near-optimal precision with high probability, if the Bernoulli templates are sufficiently different and if the number of features is sufficiently high. We illustrate the theoretical results with synthetic and real examples.
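The generative model in the abstract can be sketched in a few lines: each mixture component is a binary template, and an example is drawn by picking a template and flipping each of its bits independently with a fixed probability. The following is a minimal illustration of this sampling process; the dimension, number of templates, and flip probability are hypothetical choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): 3 templates in dimension n = 200.
n, k, eps = 200, 3, 0.1                       # dimension, templates, flip probability
templates = rng.integers(0, 2, size=(k, n))   # each template is a binary vector

def sample(m):
    """Draw m examples: pick a template uniformly at random, then flip each
    binary component independently with probability eps (Bernoulli noise)."""
    labels = rng.integers(0, k, size=m)
    flips = (rng.random((m, n)) < eps).astype(int)
    return np.bitwise_xor(templates[labels], flips), labels

X, y = sample(1000)

# With the true labels known, a per-bit majority vote within a cluster
# recovers its template whenever eps < 1/2 and enough examples are drawn.
est = (X[y == 0].mean(axis=0) > 0.5).astype(int)
print((est == templates[0]).mean())
```

The paper's point is that the labels are of course not observed: the two-round EM algorithm must recover the templates from `X` alone, and does so with near-optimal precision when the templates are well separated and `n` is large.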
Full work available at URL: https://arxiv.org/abs/1305.0319
Classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Image analysis in multivariate analysis (62H35)
- Computing methodologies for image processing (68U10)
Cites Work
- Estimating the dimension of a model
- Model-Based Clustering, Discriminant Analysis, and Density Estimation
- Exploratory latent structure analysis using both identifiable and unidentifiable models
- Learning active basis models by EM-type algorithms
- Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression
- A stochastic grammar of images
Cited In (2)