On the relative expressiveness of Bayesian and neural networks
Publication:2302782
Abstract: A neural network computes a function. A central property of neural networks is that they are "universal approximators": for any given continuous function, there exists a neural network that can approximate it arbitrarily well, given enough neurons (and some additional assumptions). In contrast, a Bayesian network is a model, but each of its queries can be viewed as computing a function. In this paper, we identify some key distinctions between the functions computed by neural networks and those computed by marginal Bayesian network queries, showing that the former are more expressive than the latter. Moreover, we propose a simple augmentation to Bayesian networks (a testing operator) which enables their marginal queries to become "universal approximators."
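The abstract's contrast can be made concrete with a minimal sketch. The Python example below (illustrative only, not code from the paper; all names and parameter values are hypothetical) shows a marginal query on a two-node Bayesian network viewed as a function of a soft-evidence likelihood, which works out to a ratio of expressions linear in that likelihood, alongside a one-hidden-layer sigmoid network of the kind covered by the universal-approximation results cited below. The paper's testing operator is not shown.

```python
import math

# Tiny Bayesian network: X -> Y, both binary.
P_X1 = 0.3                        # P(X = 1)
P_Y1_GIVEN_X = {0: 0.2, 1: 0.9}   # P(Y = 1 | X = x)

def bn_marginal_query(lam_x1):
    """P(Y = 1 | soft evidence on X), with lam_x1 the likelihood of X = 1.

    Numerator and denominator are each linear in lam_x1 (and multilinear
    in the network parameters), so the query computes a ratio of
    multilinear functions -- a rather constrained function class.
    """
    num = ((1 - P_X1) * (1 - lam_x1) * P_Y1_GIVEN_X[0]
           + P_X1 * lam_x1 * P_Y1_GIVEN_X[1])
    den = (1 - P_X1) * (1 - lam_x1) + P_X1 * lam_x1
    return num / den

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nn_query(x, w1, b1, w2, b2):
    """One-hidden-layer sigmoid network: the family to which the
    universal-approximation theorems apply."""
    hidden = [sigmoid(w * x + b) for w, b in zip(w1, b1)]
    return sigmoid(sum(v * h for v, h in zip(w2, hidden)) + b2)

if __name__ == "__main__":
    # Arbitrary illustrative weights for a 3-unit hidden layer.
    w1, b1 = [4.0, -6.0, 2.0], [-1.0, 3.0, 0.5]
    w2, b2 = [2.5, -3.0, 1.5], 0.2
    for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
        print(f"x={x:.2f}  BN query: {bn_marginal_query(x):.3f}"
              f"  NN output: {nn_query(x, w1, b1, w2, b2):.3f}")
```

Both functions map [0, 1] to [0, 1], but with enough hidden units the network family can approximate any continuous function on a compact domain, whereas the marginal query's shape is fixed by the network's parameterization.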
Cites work
- scientific article, zbMATH DE number 1149408 (title not available)
- scientific article, zbMATH DE number 1149420 (title not available)
- A Fast Learning Algorithm for Deep Belief Nets
- A differential approach to inference in Bayesian networks
- A logical calculus of the ideas immanent in nervous activity
- An analysis of first-order logics of probability
- Approximation by superpositions of a sigmoidal function
- Connectionist learning of belief networks
- Deep learning
- Modeling and Reasoning with Bayesian Networks
- Multilayer feedforward networks are universal approximators
- Noisy-or classifier
- On the revision of probabilistic beliefs using uncertain evidence
- Probabilistic logic
Cited in (4)