Nonlinear approximation and (deep) ReLU networks
Publication:2117331
MSC classification:
- Artificial neural networks and deep learning (68T07)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
- Approximation by other special function classes (41A30)
- Rate of convergence, degree of approximation (41A25)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
- Neural nets applied to problems in time-dependent statistical mechanics (82C32)
Abstract: This article is concerned with the approximation and expressive powers of deep neural networks. This is an active research area currently producing many interesting papers. The results most commonly found in the literature prove that neural networks approximate functions with classical smoothness to the same accuracy as classical linear methods of approximation, e.g. approximation by polynomials or by piecewise polynomials on prescribed partitions. However, approximation by neural networks depending on n parameters is a form of nonlinear approximation and as such should be compared with other nonlinear methods such as variable-knot splines or n-term approximation from dictionaries. The performance of neural networks in targeted applications such as machine learning indicates that they actually possess even greater approximation power than these traditional methods of nonlinear approximation. The main results of this article prove that this is indeed the case, by exhibiting large classes of functions that can be efficiently captured by neural networks but for which classical nonlinear methods fall short. The present article purposefully limits itself to the approximation of univariate functions by ReLU networks. Many generalizations to functions of several variables and to other activation functions can be envisioned. However, even in the simplest setting considered here, a theory that completely quantifies the approximation power of neural networks is still lacking.
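As an illustration of the nonlinear-approximation viewpoint in the abstract (this sketch is not taken from the paper; the target function, knot choices, and helper names are assumptions made here for demonstration), a univariate one-hidden-layer ReLU network computes a continuous piecewise-linear function whose breakpoints are free parameters, i.e. a free-knot linear spline. Adapting the knots to the target, rather than fixing them in advance, is exactly what distinguishes nonlinear from linear approximation:

```python
# Minimal sketch: a shallow univariate ReLU network as a free-knot linear spline.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_net(x, knots, target):
    """One-hidden-layer ReLU network interpolating `target` at the given knots."""
    y = target(knots)
    # Slope of the linear interpolant on each interval [b_k, b_{k+1}].
    slopes = np.diff(y) / np.diff(knots)
    # Outer weights: first slope, then slope increments at the interior knots.
    a = np.concatenate(([slopes[0]], np.diff(slopes)))
    # f(x) = y_0 + sum_k a_k * relu(x - b_k) reproduces the piecewise-linear interpolant.
    return y[0] + sum(ak * relu(x - bk) for ak, bk in zip(a, knots[:-1]))

target = np.sqrt                              # illustrative target with a singularity at 0
uniform = np.linspace(0.0, 1.0, 9)            # fixed (linear-method style) knots
graded = np.linspace(0.0, 1.0, 9) ** 2        # function-adapted (nonlinear) knots
xs = np.linspace(0.0, 1.0, 2001)
for name, knots in [("uniform", uniform), ("graded", graded)]:
    err = np.max(np.abs(relu_net(xs, knots, target) - target(xs)))
    print(f"{name:8s} knots: max error = {err:.4f}")
```

With the same number of parameters, the knots graded toward the singularity give a noticeably smaller uniform error than equally spaced knots, which is the kind of advantage over prescribed partitions that the abstract alludes to.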
Recommendations
- Error bounds for approximations with deep ReLU networks
- A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
- Approximation spaces of deep neural networks
- Provable approximation properties for deep neural networks
- Deep ReLU networks and high-order finite element methods
Cites work
- scientific article; zbMATH DE number 4075444
- scientific article; zbMATH DE number 1215245
- scientific article; zbMATH DE number 477682
- scientific article; zbMATH DE number 1889798
- Approximation by superpositions of a sigmoidal function
- Deep Network Approximation for Smooth Functions
- Deep learning in high dimension: neural network expression rates for generalized polynomial chaos expansions in UQ
- Deep network approximation characterized by number of neurons
- Deep vs. shallow networks: an approximation theory perspective
- Error bounds for approximations with deep ReLU networks
- Exponential convergence of the deep neural network approximation for analytic functions
- Multilayer feedforward networks are universal approximators
- Neural Networks for Localized Approximation
- Optimal approximation with sparsely connected deep neural networks
- Optimal nonlinear approximation
- Provable approximation properties for deep neural networks
- The Takagi function: a survey
- Wavelet compression and nonlinear n-widths
- Weierstrass' function and chaos
Cited in (85)
- The construction and approximation of ReLU neural network operators
- SignReLU neural network and its approximation ability
- Error bounds for approximations with deep ReLU networks
- Constructive deep ReLU neural network approximation
- Why rectified linear activation functions? Why max-pooling? A possible explanation
- Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples
- On the SQH method for solving optimal control problems with non-smooth state cost functionals or constraints
- Exponential ReLU DNN expression of holomorphic maps in high dimension
- Error bounds for approximations using multichannel deep convolutional neural networks with downsampling
- Random neural networks in the infinite width limit as Gaussian processes
- A functional equation with polynomial solutions and application to neural networks
- Convergence rates of deep ReLU networks for multiclass classification
- A multivariate Riesz basis of ReLU neural networks
- A note on the applications of one primary function in deep neural networks
- Depth separations in neural networks: what is actually being separated?
- Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs
- Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm
- Deep Neural Network Approximation Theory
- Approximation spaces of deep neural networks
- Adaptive two-layer ReLU neural network. I: Best least-squares approximation
- A global universality of two-layer neural networks with ReLU activations
- Neural parametric Fokker-Planck equation
- Deep ReLU networks and high-order finite element methods. II: Chebyšev emulation
- Theoretical issues in deep networks
- A convergent deep learning algorithm for approximation of polynomials
- Alternating minimization for regression with tropical rational functions
- Robust nonparametric regression based on deep ReLU neural networks
- Spline representation and redundancies of one-dimensional ReLU neural network models
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Error bounds for approximations with deep ReLU neural networks in \(W^{s,p}\) norms
- Mesh-informed neural networks for operator learning in finite element spaces
- Dying ReLU and initialization: theory and numerical examples
- Machine learning design of volume of fluid schemes for compressible flows
- Expressivity of Deep Neural Networks
- Approximation capabilities of neural networks on unbounded domains
- ReLU networks are universal approximators via piecewise linear or constant functions
- Universal approximation with quadratic deep networks
- Neural network with unbounded activation functions is universal approximator
- Deep ReLU neural network approximation in Bochner spaces and applications to parametric PDEs
- Convergence of deep convolutional neural networks
- Approximation error for neural network operators by an averaged modulus of smoothness
- Deep learning via dynamical systems: an approximation perspective
- A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
- On the latent dimension of deep autoencoders for reduced order modeling of PDEs parametrized by random fields
- Deep neural network surrogates for nonsmooth quantities of interest in shape uncertainty quantification
- Comparative studies on mesh-free deep neural network approach versus finite element method for solving coupled nonlinear hyperbolic/wave equations
- Deep learning-based approximation of Goldbach partition function
- ReLU deep neural networks from the hierarchical basis perspective
- Nonlinear approximation via compositions
- Information theory and recovery algorithms for data fusion in Earth observation
- scientific article; zbMATH DE number 683527
- Better approximations of high dimensional smooth functions by deep neural networks with rectified power units
- A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations
- Estimating a regression function in exponential families by model selection
- Neural network approximation
- High-dimensional distribution generation through deep neural networks
- Connections between numerical algorithms for PDEs and neural networks
- Improving the expressive power of deep neural networks through integral activation transform
- Approximation results for gradient flow trained shallow neural networks in \(1d\)
- Sampling complexity of deep approximation spaces
- Approximation of compositional functions with ReLU neural networks
- Best \(n\)-term approximation of diagonal operators and application to function spaces with mixed smoothness
- PowerNet: efficient representations of polynomials and smooth functions by deep neural networks with rectified power units
- ReLU neural networks of polynomial size for exact maximum flow computation
- Sparse Deep Neural Network for Nonlinear Partial Differential Equations
- Approximation properties of deep ReLU CNNs
- Universality of gradient descent neural network training
- On the approximation of rough functions with deep neural networks
- scientific article; zbMATH DE number 7626778
- Simultaneous neural network approximation for smooth functions
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With \(\ell^1\) and \(\ell^0\) Controls
- Designing rotationally invariant neural networks from PDEs and variational methods
- Neural ODE Control for Classification, Approximation, and Transport
- Neural networks with ReLU powers need less depth
- Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks
- Deep vs. shallow networks: an approximation theory perspective
- Thermodynamically consistent physics-informed neural networks for hyperbolic systems
- A mesh-free method using piecewise deep neural network for elliptic interface problems
- Exponential ReLU neural network approximation rates for point and edge singularities
- Deep ReLU networks and high-order finite element methods
- Transferable neural networks for partial differential equations
- Linearized two-layers neural networks in high dimension
- Expressive power of ReLU and step networks under floating-point operations
- Approximation in shift-invariant spaces with deep ReLU neural networks
- Optimal stable nonlinear approximation