Approximation rates for neural networks with encodable weights in smoothness spaces
From MaRDI portal
Publication:2055067
MSC classification
- Artificial neural networks and deep learning (68T07)
- Abstract approximation theory (approximation in normed linear spaces and other abstract spaces) (41A65)
- Sobolev spaces and other spaces of "smooth" functions, embedding theorems, trace theorems (46E35)
- Rate of convergence, degree of approximation (41A25)
Abstract: We examine the necessary and sufficient complexity of neural networks to approximate functions from different smoothness spaces under the restriction of encodable network weights. Based on an entropy argument, we start by proving lower bounds for the number of nonzero encodable weights for neural network approximation in Besov spaces, Sobolev spaces and more. These results are valid for all sufficiently smooth activation functions. Afterwards, we provide a unifying framework for the construction of approximate partitions of unity by neural networks with fairly general activation functions. This allows us to approximate localized Taylor polynomials by neural networks and make use of the Bramble-Hilbert Lemma. Based on our framework, we derive almost optimal upper bounds in higher-order Sobolev norms. This work advances the theory of approximating solutions of partial differential equations by neural networks.
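The abstract's central construction, an approximate partition of unity built from a smooth activation function, can be illustrated with a minimal sketch. The snippet below is not the paper's construction; it is a standard one-dimensional toy in which each "bump" is the difference of two shifted logistic-sigmoid neurons, and shifted bumps on a uniform grid sum to approximately 1 away from the boundary (the grid size `N` and sharpness `k` are illustrative choices):

```python
import numpy as np

def sigma(x):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

def bump(x, center, width, k=50.0):
    """Approximate indicator of [center - width/2, center + width/2],
    realized as the difference of two shifted sigmoid neurons."""
    left = center - width / 2
    right = center + width / 2
    return sigma(k * (x - left)) - sigma(k * (x - right))

# N bumps of width 1/N covering [0, 1]; consecutive edges coincide, so the
# sum telescopes to sigma(k*x) - sigma(k*(x-1)), which is close to 1 inside.
N = 10
centers = (np.arange(N) + 0.5) / N
x = np.linspace(0.2, 0.8, 200)  # stay away from the boundary of [0, 1]
pou = sum(bump(x, c, 1.0 / N) for c in centers)

print(np.max(np.abs(pou - 1.0)))  # small deviation from 1 on [0.2, 0.8]
```

Multiplying such bumps against local Taylor polynomials is the classical route (via the Bramble-Hilbert lemma) to local-to-global approximation bounds of the kind the abstract describes.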
Recommendations
- Estimation of approximating rate for neural network in \(L^p_w\) spaces
- scientific article; zbMATH DE number 1182753
- scientific article; zbMATH DE number 1843047
- The geometric rate of approximation of neural networks in \(L^p\)-space
- Approximation rates for neural networks with general activation functions
- Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
- On the near optimality of the stochastic approximation of smooth functions by neural networks
- scientific article; zbMATH DE number 1784858
- Bounds on rates of variable-basis and neural-network approximation
- Measure Theoretic Results for Approximation by Neural Networks with Limited Weights
Cites work
- scientific article; zbMATH DE number 3491650
- scientific article; zbMATH DE number 3602126
- scientific article; zbMATH DE number 2106999
- scientific article; zbMATH DE number 1405266
- scientific article; zbMATH DE number 962825
- A Multivariate Faa di Bruno Formula with Applications
- A practical guide to splines
- A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function
- Approximation and estimation bounds for artificial neural networks
- Approximation by superposition of sigmoidal and radial basis functions
- Approximation by superpositions of a sigmoidal function
- Approximation results for neural network operators activated by sigmoidal functions
- Better approximations of high dimensional smooth functions by deep neural networks with rectified power units
- DGM: a deep learning algorithm for solving partial differential equations
- Deep ReLU networks and high-order finite element methods
- Deep learning in high dimension: neural network expression rates for generalized polynomial chaos expansions in UQ
- Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
- Error bounds for approximations with deep ReLU networks
- Error bounds for approximations with deep ReLU neural networks in \(W^{s , p}\) norms
- Lectures on Pseudo-Differential Operators: Regularity Theorems and Applications to Non-Elliptic Problems. (MN-24)
- Lower bounds for approximation by MLP neural networks
- Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations
- Multivariate neural network operators with sigmoidal activation functions
- Nonlinear partial differential equations with applications
- Nonparametric regression using deep neural networks with ReLU activation function
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Optimal approximation with sparsely connected deep neural networks
- Provable approximation properties for deep neural networks
- Solving high-dimensional partial differential equations using deep learning
- The Mathematical Theory of Finite Element Methods
- The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems
- The finite element methods for elliptic problems
Cited in (33)
- On PDE characterization of smooth hierarchical functions computed by neural networks
- Error analysis of deep Ritz methods for elliptic equations
- A Rate of Convergence of Weak Adversarial Neural Networks for the Second Order Parabolic PDEs
- Convergence Analysis of a Quasi-Monte Carlo-Based Deep Learning Algorithm for Solving Partial Differential Equations
- Improved Analysis of PINNs: Alleviate the CoD for Compositional Solutions
- Interpolation and approximation via momentum ResNets and neural ODEs
- Deep learning based on randomized quasi-Monte Carlo method for solving linear Kolmogorov partial differential equation
- Solving Poisson problems in polygonal domains with singularity enriched physics informed neural networks
- Recovering the source term in elliptic equation via deep learning: method and convergence analysis
- Error analysis for deep neural network approximations of parametric hyperbolic conservation laws
- Current density impedance imaging with PINNs
- On the approximation of functions by tanh neural networks
- Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning
- Mesh-informed neural networks for operator learning in finite element spaces
- Imaging conductivity from current density magnitude using neural networks
- Convergence analysis for over-parameterized deep learning
- Randomized neural network with Petrov-Galerkin methods for solving linear and nonlinear partial differential equations
- Stationary Density Estimation of Itô Diffusions Using Deep Learning
- Approximation error for neural network operators by an averaged modulus of smoothness
- Solving Elliptic Problems with Singular Sources Using Singularity Splitting Deep Ritz Method
- Construction and approximation for a class of feedforward neural networks with sigmoidal function
- Uniform approximation rates and metric entropy of shallow neural networks
- Convergence Analysis of the Deep Galerkin Method for Weak Solutions
- Neural Control of Parametric Solutions for High-Dimensional Evolution PDEs
- Nonclosedness of sets of neural networks in Sobolev spaces
- Friedrichs Learning: Weak Solutions of Partial Differential Equations via Deep Learning
- A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations
- Convergence of Physics-Informed Neural Networks Applied to Linear Second-Order Elliptic Interface Problems
- Approximation rates for neural networks with general activation functions
- Asymptotic analysis of neural network operators employing the Hardy-Littlewood maximal inequality
- Error analysis of the mixed residual method for elliptic equations
- Simultaneous neural network approximation for smooth functions
- Error analysis for physics-informed neural networks (PINNs) approximating Kolmogorov PDEs