Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

From MaRDI portal
Publication:6289928

DOI: 10.3390/MATH7100992
arXiv: 1708.02691
Wikidata: Q127028917
Scholia: Q127028917
MaRDI QID: Q6289928
FDO: Q6289928


Authors: Boris Hanin


Publication date: 8 August 2017

Abstract: This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width $w_{\min}(d)$ so that ReLU nets of width $w_{\min}(d)$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well-suited for representing convex functions. In particular, we prove that ReLU nets with width $d+1$ can approximate any continuous convex function of $d$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3$.
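As an illustration of the bounded-width setting described in the abstract, the following is a minimal sketch (not the paper's construction): a deep ReLU network whose hidden layers all have width $d+3$ is fitted to a continuous function on $[0,1]^d$. The target function, depth, sample sizes, and training loop are illustrative assumptions, and PyTorch is used only for convenience.

```python
# Minimal sketch: a fixed-width (d + 3) deep ReLU net fitted to a continuous
# function on the unit cube [0, 1]^d. Architecture width follows the abstract;
# everything else (target, depth, optimizer, sample sizes) is an illustrative choice.
import torch
import torch.nn as nn

d = 3          # input dimension
width = d + 3  # hidden width from the abstract's approximation result
depth = 8      # number of hidden layers (illustrative)

layers = [nn.Linear(d, width), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers += [nn.Linear(width, 1)]
net = nn.Sequential(*layers)

# Illustrative continuous target on [0, 1]^d.
def target(x):
    return torch.sin(x.sum(dim=1, keepdim=True)) + 0.5 * (x ** 2).sum(dim=1, keepdim=True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(256, d)                      # uniform samples from [0, 1]^d
    loss = ((net(x) - target(x)) ** 2).mean()   # mean squared error
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x_test = torch.rand(4096, d)
    err = (net(x_test) - target(x_test)).abs().max()
    print(f"max abs error on test samples: {err.item():.4f}")
```

The sketch only demonstrates the shape of the architecture (arbitrary depth, hidden width pinned at $d+3$); the paper's quantitative depth estimates are not reproduced by this training loop.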
