Deep Network Approximation Characterized by Number of Neurons
DOI: 10.4208/cicp.OA-2020-0149
OpenAlex: W3101996726
MaRDI QID: Q5162359
Haizhao Yang, Zuowei Shen, Shijun Zhang
Publication date: 2 November 2021
Published in: Communications in Computational Physics
Full work available at URL: https://arxiv.org/abs/1906.05497
Keywords: modulus of continuity; parallel computing; Hölder continuity; approximation theory; low-dimensional manifold; deep ReLU neural networks
MSC classification: Artificial neural networks and deep learning (68T07); Rate of convergence, degree of approximation (41A25)
Related Items (30)
- Discovery of subdiffusion problem with noisy data via deep learning
- An Augmented Lagrangian Deep Learning Method for Variational Problems with Essential Boundary Conditions
- A Deep Learning Method for Elliptic Hemivariational Inequalities
- Full error analysis for the training of deep neural networks
- The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
- Approximation bounds for norm constrained neural networks with applications to regression and GANs
- Simultaneous neural network approximation for smooth functions
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- Neural network approximation: three hidden layers are enough
- On the capacity of deep generative networks for approximating distributions
- Convergence of deep convolutional neural networks
- A Deep Generative Approach to Conditional Sampling
- The Kolmogorov-Arnold representation theorem revisited
- Approximation Analysis of Convolutional Neural Networks
- Active learning based sampling for high-dimensional nonlinear partial differential equations
- SignReLU neural network and its approximation ability
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs
- Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors
- Deep learning via dynamical systems: an approximation perspective
- Approximation in shift-invariant spaces with deep ReLU neural networks
- Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations
- Deep Network Approximation for Smooth Functions
- Optimal approximation rate of ReLU networks in terms of width and depth
- Stochastic Markov gradient descent and training low-bit neural networks
- The Gap between Theory and Practice in Function Approximation with Deep Neural Networks
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
- Spline representation and redundancies of one-dimensional ReLU neural network models
- Nonlinear approximation and (deep) ReLU networks
- A New Function Space from Barron Class and Application to Neural Network Approximation
Cites Work
- Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
- Constructive approximate interpolation by neural networks
- Nonlinear approximation using Gaussian kernels
- Random projections of smooth manifolds
- Lower bounds for approximation by MLP neural networks
- Efficient distribution-free learning of probabilistic concepts
- Saturation classes for MAX-product neural network operators activated by sigmoidal functions
- Multilayer feedforward networks are universal approximators
- Provable approximation properties for deep neural networks
- Approximation results in Orlicz spaces for sequences of Kantorovich MAX-product neural network operators
- Approximation of functions of finite variation by superpositions of a sigmoidal function
- Multivariate \(n\)-term rational and piecewise polynomial approximation
- The rate of approximation of Gaussian radial basis neural networks in continuous function space
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Nonlinear approximation via compositions
- Nonparametric regression using deep neural networks with ReLU activation function
- A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
- Almost optimal estimates for approximation and learning by radial basis function networks
- A priori estimates of the population risk for two-layer neural networks
- Error bounds for approximations with deep ReLU networks
- Convergence for a family of neural network operators in Orlicz spaces
- Approximation using scattered shifts of a multivariate function
- Universal approximation bounds for superpositions of a sigmoidal function
- Extension of range of functions
- Matching pursuits with time-frequency dictionaries
- Solving high-dimensional partial differential equations using deep learning
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Approximation by superpositions of a sigmoidal function