Approximation rates for neural networks with encodable weights in smoothness spaces

From MaRDI portal
Publication:2055067

DOI: 10.1016/J.NEUNET.2020.11.010 · zbMATH Open: 1475.68314 · arXiv: 2006.16822 · OpenAlex: W3107277102 · Wikidata: Q104456454 · Scholia: Q104456454 · MaRDI QID: Q2055067 · FDO: Q2055067


Authors: Ingo Gühring, Mones Raslan


Publication date: 3 December 2021

Published in: Neural Networks

Abstract: We examine the necessary and sufficient complexity of neural networks to approximate functions from different smoothness spaces under the restriction of encodable network weights. Based on an entropy argument, we start by proving lower bounds for the number of nonzero encodable weights for neural network approximation in Besov spaces, Sobolev spaces and more. These results are valid for all sufficiently smooth activation functions. Afterwards, we provide a unifying framework for the construction of approximate partitions of unity by neural networks with fairly general activation functions. This allows us to approximate localized Taylor polynomials by neural networks and make use of the Bramble-Hilbert Lemma. Based on our framework, we derive almost optimal upper bounds in higher-order Sobolev norms. This work advances the theory of approximating solutions of partial differential equations by neural networks.
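The abstract's central construction is an approximate partition of unity built from a general activation function. As a hedged illustration (not the paper's actual construction), the sketch below builds approximate bump functions on subintervals of [0, 1] as differences of two steep sigmoids, each realizable by a two-neuron shallow network; by telescoping, the bumps sum to nearly 1 on the interior of the domain:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bump(x, a, b, s=50.0):
    # Approximate indicator of [a, b] as a difference of two steep
    # sigmoids; this is exactly what two neurons with a sigmoidal
    # activation can represent.
    return sigmoid(s * (x - a)) - sigmoid(s * (x - b))

# Partition [0, 1] into n cells; the n bumps form an approximate
# partition of unity, so their sum telescopes to
# sigmoid(s*x) - sigmoid(s*(x - 1)), which is close to 1 away from
# the endpoints.
n = 4
knots = np.linspace(0.0, 1.0, n + 1)
x = np.linspace(0.1, 0.9, 200)
total = sum(bump(x, knots[i], knots[i + 1]) for i in range(n))
deviation = np.max(np.abs(total - 1.0))
print(deviation)  # small on the interior of [0, 1]
```

Multiplying such bumps against local Taylor polynomials (the localization step the abstract refers to) then lets one glue local polynomial approximants into a global one, which is where the Bramble-Hilbert lemma supplies the error estimate.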


Full work available at URL: https://arxiv.org/abs/2006.16822




Cited in: 33 publications


