Approximation bounds for convolutional neural networks in operator learning

From MaRDI portal
Publication: Q6403941

DOI: 10.1016/j.neunet.2023.01.029
arXiv: 2207.01546
OpenAlex: W4318159342
MaRDI QID: Q6403941


Authors: Nicola Rares Franco, Stefania Fresca, Andrea Manzoni, Paolo Zunino


Publication date: 4 July 2022

Abstract: Recently, deep Convolutional Neural Networks (CNNs) have proven successful when employed in areas such as reduced order modeling of parametrized PDEs. Despite their accuracy and efficiency, the approaches available in the literature still lack a rigorous justification of their mathematical foundations. Motivated by this fact, in this paper we derive rigorous error bounds for the approximation of nonlinear operators by means of CNN models. More precisely, we address the case in which an operator maps a finite-dimensional input onto a functional output, and a neural network model is used to approximate a discretized version of the input-to-output map. The resulting error estimates provide a clear interpretation of the hyperparameters defining the neural network architecture. All the proofs are constructive, and they ultimately reveal a deep connection between CNNs and the Fourier transform. Finally, we complement the derived error bounds with numerical experiments that illustrate their application.
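To make the setting concrete, the following is a minimal, hypothetical sketch (not the authors' code) of the input-to-output structure the abstract describes: a finite-dimensional parameter is lifted onto a spatial grid and passed through a convolutional layer, producing a discretized functional output. The circular convolution is evaluated via the FFT, echoing the CNN/Fourier connection the paper highlights; all weights here are random and untrained, purely for illustration.

```python
import numpy as np

def dense_lift(mu, W, b):
    """Lift the finite-dimensional input mu onto a coarse spatial grid."""
    return np.tanh(W @ mu + b)

def conv1d_circular(signal, kernel):
    """Periodic 1-D convolution computed in Fourier space -- a nod to the
    CNN/Fourier-transform connection discussed in the paper."""
    n = len(signal)
    k = np.zeros(n)
    k[: len(kernel)] = kernel
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(k)))

rng = np.random.default_rng(0)
p, n = 3, 64                       # input dimension, grid resolution
W = rng.standard_normal((n, p))    # lifting weights (random, untrained)
b = np.zeros(n)
kernel = rng.standard_normal(5)    # convolutional filter

mu = rng.standard_normal(p)        # a sampled finite-dimensional input
u_hat = conv1d_circular(dense_lift(mu, W, b), kernel)
print(u_hat.shape)                 # a discretized function on the n-point grid
```

In the paper's setting one would train such a model so that `u_hat` approximates the discretized output of the target operator; the derived bounds relate the achievable error to architectural hyperparameters such as depth, channel width, and filter size.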


Full work available at URL: https://doi.org/10.1016/j.neunet.2023.01.029






Cites Work


Cited In (6)





