Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
From MaRDI portal
Publication:6289928
DOI: 10.3390/MATH7100992 · arXiv: 1708.02691 · Wikidata: Q127028917 · Scholia: Q127028917 · MaRDI QID: Q6289928 · FDO: Q6289928
Authors: Boris Hanin
Publication date: 8 August 2017
Abstract: This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: What is the minimal width $w_{\min}(d_{\mathrm{in}})$ so that ReLU nets of width $w_{\min}(d_{\mathrm{in}})$ (and arbitrary depth) can approximate any continuous function on the unit cube $[0,1]^{d_{\mathrm{in}}}$ arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? Our approach in this paper is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well-suited for representing convex functions. In particular, we prove that ReLU nets with width $d_{\mathrm{in}} + 1$ can approximate any continuous convex function of $d_{\mathrm{in}}$ variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the $d_{\mathrm{in}}$-dimensional cube $[0,1]^{d_{\mathrm{in}}}$ by ReLU nets with width $d_{\mathrm{in}} + 3$.
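The following Python sketch illustrates the mechanism named in the abstract, not the paper's exact construction: a continuous convex function can be approximated by the maximum of finitely many supporting affine maps, and the abstract states that a ReLU net of width $d_{\mathrm{in}} + 1$ can realise such approximations. The target function, anchor points, and number of hyperplanes below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's construction): approximate a convex
# function on the unit cube by the max of finitely many supporting affine
# maps -- the representation that width-(d+1) ReLU nets exploit, per the abstract.

d = 2                                   # input dimension (illustrative choice)
f = lambda x: np.sum(x ** 2, axis=-1)   # convex target on [0, 1]^d

rng = np.random.default_rng(0)
anchors = rng.random((64, d))           # anchor points in [0, 1]^d
grads = 2.0 * anchors                   # gradient of ||x||^2 at each anchor
offsets = f(anchors) - np.sum(grads * anchors, axis=-1)

def approx(x):
    """Maximum of affine minorants of f; a deep net of bounded width can
    compute this as a running maximum, e.g. max(a, b) = a + relu(b - a)."""
    return np.max(x @ grads.T + offsets, axis=-1)

# Quick check of the approximation error on random test points.
xs = rng.random((1000, d))
print("max abs error:", np.abs(approx(xs) - f(xs)).max())
```

Increasing the number of supporting hyperplanes (and, in the network picture, the depth) tightens the approximation, which is the trade-off the abstract's quantitative depth estimates address.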
This page was built for publication: Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations