On the relative expressiveness of Bayesian and neural networks

From MaRDI portal
Publication: 2302782

DOI: 10.1016/J.IJAR.2019.07.008 · zbMATH Open: 1468.68097 · arXiv: 1812.08957 · OpenAlex: W2966309298 · MaRDI QID: Q2302782 · FDO: Q2302782

Adnan Darwiche, Arthur Choi, Ruocheng Wang

Publication date: 26 February 2020

Published in: International Journal of Approximate Reasoning

Abstract: A neural network computes a function. A central property of neural networks is that they are "universal approximators": for a given continuous function, there exists a neural network that can approximate it arbitrarily well, given enough neurons (and some additional assumptions). In contrast, a Bayesian network is a model, but each of its queries can be viewed as computing a function. In this paper, we identify some key distinctions between the functions computed by neural networks and those computed by marginal Bayesian network queries, showing that the former are more expressive than the latter. Moreover, we propose a simple augmentation to Bayesian networks (a testing operator), which enables their marginal queries to become "universal approximators."


Full work available at URL: https://arxiv.org/abs/1812.08957
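The universal-approximation property the abstract refers to can be illustrated with a minimal NumPy sketch (not taken from the paper): a single-hidden-layer ReLU network constructed as a piecewise-linear interpolant of a continuous function, whose worst-case error shrinks as the hidden layer widens. All function names here are illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_net_interpolant(f, n_neurons, lo=0.0, hi=np.pi):
    """Build a one-hidden-layer ReLU network interpolating f at
    n_neurons equally spaced knots (a piecewise-linear approximant)."""
    knots = np.linspace(lo, hi, n_neurons)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)
    # Each hidden unit's output weight equals the change in slope at its knot.
    weights = np.diff(slopes, prepend=0.0)
    def net(x):
        # y(x) = f(lo) + sum_i w_i * relu(x - knot_i)
        return vals[0] + relu(x[:, None] - knots[:-1][None, :]) @ weights
    return net

def max_error(f, net, lo=0.0, hi=np.pi, grid=2000):
    x = np.linspace(lo, hi, grid)
    return float(np.max(np.abs(f(x) - net(x))))

# Worst-case error over [0, pi] shrinks as neurons are added.
err_small = max_error(np.sin, relu_net_interpolant(np.sin, 8))
err_large = max_error(np.sin, relu_net_interpolant(np.sin, 64))
print(err_small, err_large)
```

For a twice-differentiable target, the error of this construction decays quadratically in the knot spacing, which is one concrete instance of the "enough neurons" condition in the abstract.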




Cited In (3)






