Visualizing and understanding sum-product networks
From MaRDI portal
Abstract: Sum-Product Networks (SPNs) are recently introduced deep probabilistic models in which several kinds of inference queries can be answered exactly in tractable time. Up to now, they have been used largely as black-box density estimators, assessed only by comparing their likelihood scores. In this paper we explore and exploit the inner representations learned by SPNs. We do this with a threefold aim: first, to gain a better understanding of the inner workings of SPNs; second, to find additional ways to evaluate an SPN model and compare it against other probabilistic models, providing diagnostic tools to practitioners; lastly, to empirically evaluate how good and meaningful the extracted representations are, as in a classic Representation Learning framework. To do so, we revisit their interpretation as deep neural networks and propose to exploit several visualization techniques on their node activations and network outputs under different types of inference queries. To investigate these models as feature extractors, we plug SPNs, learned in a greedy unsupervised fashion on image datasets, into supervised classification tasks. We extract several embedding types from node activations by filtering nodes by their type, by their associated feature abstraction level, and by their scope. In a thorough empirical comparison we show them to be competitive with those generated by popular feature extractors such as Restricted Boltzmann Machines. Finally, we investigate embeddings generated from random probabilistic marginal queries as a means to compare other tractable probabilistic models on a common ground, extending our experiments to Mixtures of Trees.
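The embedding extraction described in the abstract relies on evaluating an SPN bottom-up and reading out the activations of selected internal nodes. The following is a minimal illustrative sketch (not the authors' implementation): a hypothetical hand-built SPN over two binary variables, with Bernoulli leaves, weighted sum nodes, and product nodes over disjoint scopes, whose sum-node log-activations are collected as an embedding.

```python
import math

# Hypothetical minimal SPN, for illustration only. Leaves are Bernoulli
# distributions over single binary variables; sum nodes mix children that
# share a scope; product nodes combine children over disjoint scopes.

class Leaf:
    def __init__(self, var, p):
        self.var, self.p = var, p

    def value(self, x):
        # Bernoulli likelihood of the assignment to this leaf's variable.
        return self.p if x[self.var] == 1 else 1.0 - self.p

class Sum:
    def __init__(self, children, weights):
        self.children, self.weights = children, weights

    def value(self, x):
        # Weighted mixture of the children's values.
        return sum(w * c.value(x) for c, w in zip(self.children, self.weights))

class Product:
    def __init__(self, children):
        self.children = children

    def value(self, x):
        # Product of the children's values (disjoint scopes).
        v = 1.0
        for c in self.children:
            v *= c.value(x)
        return v

def embed(nodes, x):
    """Collect log-activations of selected nodes (e.g. all sum nodes) as
    an embedding of input x, in the spirit of the paper's feature extraction."""
    return [math.log(n.value(x)) for n in nodes]

# Tiny SPN over two binary variables X0 and X1.
l0a, l0b = Leaf(0, 0.8), Leaf(0, 0.3)
l1a, l1b = Leaf(1, 0.6), Leaf(1, 0.1)
s0 = Sum([l0a, l0b], [0.5, 0.5])
s1 = Sum([l1a, l1b], [0.7, 0.3])
root = Product([s0, s1])

x = {0: 1, 1: 0}
print(root.value(x))       # joint probability assigned to X0=1, X1=0
print(embed([s0, s1], x))  # sum-node embedding of x
```

Filtering which nodes feed `embed` (by node type, depth, or scope) corresponds to the different embedding types the paper compares; the network structure and parameters here are invented for the sketch.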
Cites work
- A differential approach to inference in Bayesian networks
- An efficient learning procedure for deep Boltzmann machines
- On the expressive power of deep architectures
- Probabilistic graphical models.
- Reducing the Dimensionality of Data with Neural Networks
- Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion
Cited in (8)
- A hierarchy of sum-product networks using robustness
- Sum-product graphical models
- On converting sum-product networks into Bayesian networks
- Robustifying sum-product networks
- Efficient algorithms for robustness analysis of maximum a posteriori inference in selective sum-product networks
- Resolving inconsistencies of scope interpretations in sum-product networks
- Strudel: A fast and accurate learner of structured-decomposable probabilistic circuits
- Learning directed acyclic graph SPNs in sub-quadratic time