Approximation bounds for convolutional neural networks in operator learning
From MaRDI portal
Publication: 6403941
DOI: 10.1016/J.NEUNET.2023.01.029
arXiv: 2207.01546
OpenAlex: W4318159342
MaRDI QID: Q6403941
Authors: Nicola Rares Franco, Stefania Fresca, Andrea Manzoni, Paolo Zunino
Publication date: 4 July 2022
Abstract: Recently, deep Convolutional Neural Networks (CNNs) have proven to be successful when employed in areas such as reduced order modeling of parametrized PDEs. Despite their accuracy and efficiency, the approaches available in the literature still lack a rigorous justification of their mathematical foundations. Motivated by this fact, in this paper we derive rigorous error bounds for the approximation of nonlinear operators by means of CNN models. More precisely, we address the case in which an operator maps a finite dimensional input onto a functional output, and a neural network model is used to approximate a discretized version of the input-to-output map. The resulting error estimates provide a clear interpretation of the hyperparameters defining the neural network architecture. All the proofs are constructive, and they ultimately reveal a deep connection between CNNs and the Fourier transform. Finally, we complement the derived error bounds by numerical experiments that illustrate their application.
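The "deep connection between CNNs and the Fourier transform" mentioned in the abstract rests on a classical fact: a convolutional layer with circular padding applies a circular convolution, which the discrete Fourier transform diagonalizes. A minimal numerical check of that identity (a sketch with assumed variable names, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x = rng.standard_normal(n)  # discretized input signal
w = rng.standard_normal(n)  # convolution filter (zero-padded to length n)

# Direct circular convolution: y[i] = sum_j w[j] * x[(i - j) mod n]
y_direct = np.array(
    [sum(w[j] * x[(i - j) % n] for j in range(n)) for i in range(n)]
)

# Convolution theorem: the same map is pointwise multiplication of spectra
y_fft = np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)).real

assert np.allclose(y_direct, y_fft)
```

Because the DFT turns each convolutional filter into a diagonal operator, spectral arguments of this kind can be used to bound what a stack of convolutional layers can represent.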
Full work available at URL: https://doi.org/10.1016/j.neunet.2023.01.029
Artificial neural networks and deep learning (68T07); Numerical analysis (65-XX); Approximations and expansions (41-XX)
Cites Work
- Reduced basis methods for partial differential equations. An introduction
- A Generalization of Hermite's Interpolation Formula
- Approximation by superpositions of a sigmoidal function
- Title not available
- Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
- Universality of deep convolutional neural networks
- Numerical approximation of parametrized problems in cardiac electrophysiology by a local reduced basis method
- A comprehensive deep learning-based approach to reduced order modeling of nonlinear time-dependent parametrized PDEs
- POD-DL-ROM: enhancing deep learning-based reduced order models for nonlinear parametrized PDEs by proper orthogonal decomposition
- Equivalence of approximation by convolutional neural networks and fully-connected networks
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- Enhancing Accuracy of Deep Learning Algorithms by Training with Low-Discrepancy Sequences
- Title not available
- A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations
- High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
Cited In (6)
- Application of adjoint operators to neural learning
- Operator learning using random features: a tool for scientific computing
- Improved architectures and training algorithms for deep operator networks
- On the latent dimension of deep autoencoders for reduced order modeling of PDEs parametrized by random fields
- Application of deep learning reduced-order modeling for single-phase flow in faulted porous media
- On Lipschitz Bounds of General Convolutional Neural Networks
This page was built for publication: Approximation bounds for convolutional neural networks in operator learning