Nonlinear approximation and (deep) ReLU networks
DOI: 10.1007/s00365-021-09548-z · zbMATH Open: 1501.41003 · arXiv: 1905.02199 · OpenAlex: W3160447895 · MaRDI QID: Q2117331 · FDO: Q2117331
Authors: Ingrid Daubechies, Ronald DeVore, Simon Foucart, Boris Hanin, Guergana Petrova
Publication date: 21 March 2022
Published in: Constructive Approximation
Full work available at URL: https://arxiv.org/abs/1905.02199
Recommendations
- Error bounds for approximations with deep ReLU networks
- A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
- Approximation spaces of deep neural networks
- Provable approximation properties for deep neural networks
- Deep ReLU networks and high-order finite element methods
Mathematics Subject Classification
- Artificial neural networks and deep learning (68T07)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
- Approximation by other special function classes (41A30)
- Rate of convergence, degree of approximation (41A25)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
- Neural nets applied to problems in time-dependent statistical mechanics (82C32)
Cites Work
- Title not available
- Title not available
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Title not available
- Optimal nonlinear approximation
- The Takagi function: a survey
- Weierstrass' function and chaos
- Wavelet compression and nonlinear \(n\)-widths
- Title not available
- Error bounds for approximations with deep ReLU networks
- Neural Networks for Localized Approximation
- Deep learning in high dimension: neural network expression rates for generalized polynomial chaos expansions in UQ
- Optimal approximation with sparsely connected deep neural networks
- Deep vs. shallow networks: an approximation theory perspective
- Provable approximation properties for deep neural networks
- Exponential convergence of the deep neural network approximation for analytic functions
- Deep Network Approximation for Smooth Functions
- Deep network approximation characterized by number of neurons
Cited In (84)
- Error bounds for approximations with deep ReLU networks
- Constructive deep ReLU neural network approximation
- Error bounds for approximations using multichannel deep convolutional neural networks with downsampling
- Why rectified linear activation functions? Why max-pooling? A possible explanation
- Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples
- On the SQH method for solving optimal control problems with non-smooth state cost functionals or constraints
- Exponential ReLU DNN expression of holomorphic maps in high dimension
- A functional equation with polynomial solutions and application to neural networks
- A note on the applications of one primary function in deep neural networks
- Convergence rates of deep ReLU networks for multiclass classification
- Depth separations in neural networks: what is actually being separated?
- Deep Neural Network Approximation Theory
- Approximation spaces of deep neural networks
- Neural parametric Fokker-Planck equation
- Adaptive two-layer ReLU neural network. I: Best least-squares approximation
- A global universality of two-layer neural networks with ReLU activations
- A convergent deep learning algorithm for approximation of polynomials
- Theoretical issues in deep networks
- Spline representation and redundancies of one-dimensional ReLU neural network models
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Error bounds for approximations with deep ReLU neural networks in \(W^{s , p}\) norms
- Mesh-informed neural networks for operator learning in finite element spaces
- Dying ReLU and initialization: theory and numerical examples
- Expressivity of Deep Neural Networks
- Approximation capabilities of neural networks on unbounded domains
- Machine learning design of volume of fluid schemes for compressible flows
- ReLU networks are universal approximators via piecewise linear or constant functions
- Universal approximation with quadratic deep networks
- Deep ReLU neural network approximation in Bochner spaces and applications to parametric PDEs
- Convergence of deep convolutional neural networks
- Neural network with unbounded activation functions is universal approximator
- Approximation error for neural network operators by an averaged modulus of smoothness
- Deep learning via dynamical systems: an approximation perspective
- A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
- Deep neural network surrogates for nonsmooth quantities of interest in shape uncertainty quantification
- Comparative studies on mesh-free deep neural network approach versus finite element method for solving coupled nonlinear hyperbolic/wave equations
- Deep learning-based approximation of Goldbach partition function
- ReLU deep neural networks from the hierarchical basis perspective
- Title not available
- Nonlinear approximation via compositions
- Information theory and recovery algorithms for data fusion in Earth observation
- Better approximations of high dimensional smooth functions by deep neural networks with rectified power units
- A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations
- Neural network approximation
- Connections between numerical algorithms for PDEs and neural networks
- High-dimensional distribution generation through deep neural networks
- Approximation of compositional functions with ReLU neural networks
- ReLU neural networks of polynomial size for exact maximum flow computation
- PowerNet: efficient representations of polynomials and smooth functions by deep neural networks with rectified power units
- Sparse Deep Neural Network for Nonlinear Partial Differential Equations
- Best \(n\)-term approximation of diagonal operators and application to function spaces with mixed smoothness
- Universality of gradient descent neural network training
- Approximation properties of deep ReLU CNNs
- Title not available
- Simultaneous neural network approximation for smooth functions
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With \(\ell^1\) and \(\ell^0\) Controls
- Designing rotationally invariant neural networks from PDEs and variational methods
- Exponential ReLU neural network approximation rates for point and edge singularities
- Deep vs. shallow networks: an approximation theory perspective
- Thermodynamically consistent physics-informed neural networks for hyperbolic systems
- A mesh-free method using piecewise deep neural network for elliptic interface problems
- Deep ReLU networks and high-order finite element methods
- Approximation in shift-invariant spaces with deep ReLU neural networks
- Linearized two-layers neural networks in high dimension
- Optimal stable nonlinear approximation
- The construction and approximation of ReLU neural network operators
- Random neural networks in the infinite width limit as Gaussian processes
- A multivariate Riesz basis of ReLU neural networks
- Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm
- Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs
- Deep ReLU networks and high-order finite element methods. II: Chebyšev emulation
- Alternating minimization for regression with tropical rational functions
- Robust nonparametric regression based on deep ReLU neural networks
- On the latent dimension of deep autoencoders for reduced order modeling of PDEs parametrized by random fields
- Estimating a regression function in exponential families by model selection
- Improving the expressive power of deep neural networks through integral activation transform
- Approximation results for gradient flow trained shallow neural networks in \(1d\)
- Sampling complexity of deep approximation spaces
- Neural ODE Control for Classification, Approximation, and Transport
- Neural networks with ReLU powers need less depth
- Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks
- Transferable neural networks for partial differential equations
- Expressive power of ReLU and step networks under floating-point operations
- SignReLU neural network and its approximation ability