Gaussian mixture models based on principal components and applications (Q2193297)
From MaRDI portal
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Gaussian mixture models based on principal components and applications | scientific article |
Statements
Gaussian mixture models based on principal components and applications (English)
25 August 2020
Summary: Data scientists use various machine learning algorithms to discover patterns in large data sets that can lead to actionable insights. High-dimensional data are typically reduced to a set of principal components so as to highlight similarities and differences. In this work, we deal with the reduced data by fitting a bivariate Gaussian mixture model. We discuss a heuristic for detecting important components by choosing the initial values of the location parameters in two different ways: from cluster means obtained by \(k\)-means and hierarchical clustering, and from the default values of the ``mixtools'' R package. The parameters of the model are estimated via an expectation-maximization algorithm. Criteria from the Bayesian point of view are evaluated for both techniques, demonstrating that both are efficient with respect to computational capacity. The effectiveness of the discussed techniques is demonstrated through a simulation study and on real data sets from different fields.
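A minimal R sketch of the workflow just described (an illustration under stated assumptions, not the authors' code): the data are reduced to two principal components, the component means are initialized either from \(k\)-means or hierarchical-clustering centroids or left to the package defaults, a bivariate Gaussian mixture is fitted by EM with `mvnormalmixEM` from the ``mixtools'' package, and the fits are compared with a BIC-type criterion. The example data set, the choice of \(k = 2\) components, and the parameter count used in the BIC formula are assumptions made for the illustration.

```r
## Illustrative sketch only (assumptions: the iris data, k = 2 components,
## and the 6k - 1 parameter count in the BIC formula); not the paper's code.
library(mixtools)

set.seed(1)
X <- as.matrix(iris[, 1:4])   # example data set (assumption)
k <- 2                        # number of mixture components (assumption)

## Reduce the data to its first two principal components.
Z <- prcomp(X, scale. = TRUE)$x[, 1:2]

## Initial location parameters from k-means cluster means.
km    <- kmeans(Z, centers = k, nstart = 10)
mu_km <- lapply(seq_len(k), function(j) km$centers[j, ])

## Initial location parameters from hierarchical-clustering cluster means.
cl    <- cutree(hclust(dist(Z)), k = k)
mu_hc <- lapply(seq_len(k), function(j) colMeans(Z[cl == j, , drop = FALSE]))

## Fit bivariate Gaussian mixtures by EM; omitting `mu` falls back to the
## mixtools default initialization.
fit_km  <- mvnormalmixEM(Z, mu = mu_km, k = k)
fit_hc  <- mvnormalmixEM(Z, mu = mu_hc, k = k)
fit_def <- mvnormalmixEM(Z, k = k)

## BIC-type criterion: -2 * log-likelihood + (free parameters) * log(n);
## a k-component bivariate mixture with free means/covariances has 6k - 1.
bic <- function(fit, n, k) -2 * fit$loglik + (6 * k - 1) * log(n)
sapply(list(kmeans = fit_km, hclust = fit_hc, default = fit_def),
       bic, n = nrow(Z), k = k)
```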