Deep Gaussian mixture models

From MaRDI portal
Publication: Q88400

DOI: 10.1007/S11222-017-9793-Z · zbMATH Open: 1430.62143 · arXiv: 1711.06929 · OpenAlex: W2768611535 · MaRDI QID: Q88400


Authors: Cinzia Viroli, Geoffrey J. McLachlan


Publication date: 1 December 2017

Published in: Statistics and Computing

Abstract: Deep learning is a hierarchical inference method that uses multiple successive layers of learning to describe complex relationships more efficiently. In this work, deep Gaussian mixture models are introduced and discussed. A deep Gaussian mixture model (DGMM) is a network of multiple layers of latent variables in which, at each layer, the variables follow a mixture of Gaussian distributions. The deep mixture model therefore consists of a set of nested mixtures of linear models, which together provide a nonlinear model able to describe the data in a very flexible way. To avoid overparameterized solutions, dimension reduction via factor models can be applied at each layer of the architecture, resulting in deep mixtures of factor analysers.


Full work available at URL: https://arxiv.org/abs/1711.06929
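The generative process summarized in the abstract (nested mixtures of linear-Gaussian models, sampled from the top latent layer down to the data) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the two-layer architecture, the component counts, and all parameter values (`eta`, `Lambda`, `Psi`, weights) are made-up assumptions chosen only to show the mechanics.

```python
import numpy as np

def sample_dgmm(n, layers, rng):
    """Draw n samples from a deep Gaussian mixture model (illustrative sketch).

    `layers` runs from the observed level (first) to the top level (last);
    each layer is a list of components (weight, eta, Lambda, Psi), i.e. a
    mixture of linear-Gaussian maps z_prev = eta + Lambda @ z + eps with
    eps ~ N(0, Psi).
    """
    d_top = layers[-1][0][2].shape[1]       # latent dimension at the top layer
    z = rng.standard_normal((n, d_top))     # top-level latent variable: N(0, I)
    for comps in reversed(layers):          # propagate from the top layer down
        weights = np.array([w for w, *_ in comps])
        ks = rng.choice(len(comps), size=n, p=weights)  # pick a component per sample
        out_dim = comps[0][1].shape[0]
        new_z = np.empty((n, out_dim))
        for i, k in enumerate(ks):
            _, eta, lam, psi = comps[k]
            eps = rng.multivariate_normal(np.zeros(out_dim), psi)
            new_z[i] = eta + lam @ z[i] + eps  # linear-Gaussian map of this layer
        z = new_z
    return z

# Hypothetical two-layer architecture: 1-D top latent -> 1-D middle latent -> 2-D data.
rng = np.random.default_rng(0)
layers = [
    [  # bottom layer: two linear-Gaussian components mapping 1-D to 2-D
        (0.5, np.zeros(2), np.array([[1.0], [0.5]]), 0.1 * np.eye(2)),
        (0.5, np.array([3.0, 3.0]), np.array([[0.5], [1.0]]), 0.1 * np.eye(2)),
    ],
    [  # top layer: two components acting on the 1-D top latent
        (0.6, np.array([0.0]), np.array([[1.0]]), 0.1 * np.eye(1)),
        (0.4, np.array([4.0]), np.array([[0.5]]), 0.1 * np.eye(1)),
    ],
]
x = sample_dgmm(500, layers, rng)  # 500 two-dimensional draws
```

Because the component choice is resampled at every layer, the marginal of `x` is a mixture over all component paths (here 2 × 2 = 4), which is what makes the overall model nonlinear even though each layer is a mixture of linear maps.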






Cited In (14)






This page was built for publication: Deep Gaussian mixture models
