Optimal approximation rate of ReLU networks in terms of width and depth
DOI: 10.1016/j.matpur.2021.07.009 · zbMATH Open: 1501.41010 · arXiv: 2103.00502 · OpenAlex: W3185971845 · MaRDI QID: Q2065073
Authors: Zuowei Shen, Haizhao Yang, Shijun Zhang
Publication date: 7 January 2022
Published in: Journal de Mathématiques Pures et Appliquées. Neuvième Série
Full work available at URL: https://arxiv.org/abs/2103.00502
Recommendations
- Deep network approximation characterized by number of neurons
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- On sharpness of an error bound for deep ReLU network approximation
- Deep network with approximation error being reciprocal of width to power of square root of depth
MSC classification
- Artificial neural networks and deep learning (68T07)
- Multidimensional problems (41A63)
- Rate of convergence, degree of approximation (41A25)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Approximation rates for neural networks with general activation functions
- Efficient distribution-free learning of probabilistic concepts
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- Make \(\ell_1\) regularization effective in training sparse CNN
- A priori estimates of the population risk for two-layer neural networks
- Stochastic modified equations and dynamics of stochastic gradient algorithms. I: Mathematical foundations
- Deep network with approximation error being reciprocal of width to power of square root of depth
- Exponential convergence of the deep neural network approximation for analytic functions
- Nonlinear approximation via compositions
- A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Deep network approximation characterized by number of neurons
- Neural network approximation: three hidden layers are enough
Cited In (25)
- Gauss-Newton method for solving variational problems of PDEs with neural network discretizations
- Rates of approximation by ReLU shallow neural networks
- Deep Network Approximation for Smooth Functions
- Towards Lower Bounds on the Depth of ReLU Neural Networks
- Deep network with approximation error being reciprocal of width to power of square root of depth
- Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors
- Error bounds for ReLU networks with depth and width parameters
- Deep learning via dynamical systems: an approximation perspective
- How do noise tails impact on deep ReLU networks?
- On sharpness of an error bound for deep ReLU network approximation
- Active learning based sampling for high-dimensional nonlinear partial differential equations
- Computing ground states of Bose-Einstein condensation by normalized deep neural network
- Deep network approximation characterized by number of neurons
- Weighted variation spaces and approximation by shallow ReLU networks
- Density results by deep neural network operators with integer weights
- ReLU neural networks of polynomial size for exact maximum flow computation
- Universal regular conditional distributions via probabilistic transformers
- Solving PDEs on unknown manifolds with machine learning
- Side effects of learning from low-dimensional data embedded in a Euclidean space
- Mini-workshop: Mathematics of entropic AI in the natural sciences. Abstracts from the mini-workshop held April 7--12, 2024
- Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks
- Greedy training algorithms for neural networks and applications to PDEs
- Approximation results on nonlinear operators by \(P_p\)-statistical convergence
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks