Approximating Continuous Functions by ReLU Nets of Minimal Width

Publication: 6293271

arXiv: 1710.11278 · MaRDI QID: Q6293271 · FDO: Q6293271

Boris Hanin, Mark Sellke

Publication date: 30 October 2017

Abstract: This article concerns the expressive power of depth in deep feed-forward neural nets with ReLU activations. Specifically, we answer the following question: for a fixed $d_{\mathrm{in}} \geq 1$, what is the minimal width $w$ so that neural nets with ReLU activations, input dimension $d_{\mathrm{in}}$, hidden layer widths at most $w$, and arbitrary depth can approximate any continuous, real-valued function of $d_{\mathrm{in}}$ variables arbitrarily well? It turns out that this minimal width is exactly equal to $d_{\mathrm{in}} + 1$. That is, if all the hidden layer widths are bounded by $d_{\mathrm{in}}$, then even in the infinite-depth limit, ReLU nets can only express a very limited class of functions; on the other hand, any continuous function on the $d_{\mathrm{in}}$-dimensional unit cube can be approximated to arbitrary precision by ReLU nets in which all hidden layers have width exactly $d_{\mathrm{in}} + 1$. Our construction in fact shows that any continuous function $f\colon [0,1]^{d_{\mathrm{in}}} \to \mathbb{R}^{d_{\mathrm{out}}}$ can be approximated by a net of width $d_{\mathrm{in}} + d_{\mathrm{out}}$. We obtain quantitative depth estimates for such an approximation in terms of the modulus of continuity of $f$.
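As a minimal sketch of the architecture class the abstract describes (not the authors' explicit construction): the code below builds a deep ReLU net whose hidden layers all have width $d_{\mathrm{in}} + 1 = 2$ for a scalar function on $[0,1]$ and fits it by gradient descent in PyTorch. The target function, depth, and training settings are illustrative assumptions; the paper's result is about expressivity (some such narrow net exists for any continuous target), not about whether gradient descent will find it.

```python
# Illustrative sketch (assumed setup, not the paper's construction): a deep,
# narrow ReLU net with hidden width d_in + 1 = 2 fitted to a continuous
# target f(x) = sin(2*pi*x) on [0, 1]. Depth and training choices are arbitrary.
import math
import torch
import torch.nn as nn

d_in, d_out, width, depth = 1, 1, 2, 20  # width = d_in + 1, matching the minimal-width bound

layers = [nn.Linear(d_in, width), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers.append(nn.Linear(width, d_out))
net = nn.Sequential(*layers)  # every hidden layer has width d_in + 1

x = torch.rand(2048, d_in)              # samples from the unit cube [0, 1]^{d_in}
y = torch.sin(2 * math.pi * x)          # continuous target to approximate

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

print(f"sup-error on training samples: {(net(x) - y).abs().max().item():.4f}")
```

The depth of 20 here is an arbitrary choice; the paper's quantitative estimates relate the depth needed for a given accuracy to the modulus of continuity of the target function.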

