Optimal approximation rate of ReLU networks in terms of width and depth
From MaRDI portal
Publication: 2065073
DOI: 10.1016/j.matpur.2021.07.009
zbMath: 1501.41010
arXiv: 2103.00502
OpenAlex: W3185971845
MaRDI QID: Q2065073
Haizhao Yang, Shijun Zhang, Zuowei Shen
Publication date: 7 January 2022
Published in: Journal de Mathématiques Pures et Appliquées. Neuvième Série
Full work available at URL: https://arxiv.org/abs/2103.00502
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Multidimensional problems (41A63)
- Rate of convergence, degree of approximation (41A25)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
Related Items (10)
- DENSITY RESULTS BY DEEP NEURAL NETWORK OPERATORS WITH INTEGER WEIGHTS
- On sharpness of an error bound for deep ReLU network approximation
- Universal regular conditional distributions via probabilistic transformers
- Active learning based sampling for high-dimensional nonlinear partial differential equations
- Side effects of learning from low-dimensional data embedded in a Euclidean space
- Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors
- Deep learning via dynamical systems: an approximation perspective
- Greedy training algorithms for neural networks and applications to PDEs
- Deep Network Approximation for Smooth Functions
- Approximation results on nonlinear operators by \(P_p\)-statistical convergence
Cites Work
- Make \(\ell_1\) regularization effective in training sparse CNN
- Efficient distribution-free learning of probabilistic concepts
- Multilayer feedforward networks are universal approximators
- Approximation rates for neural networks with general activation functions
- Exponential convergence of the deep neural network approximation for analytic functions
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Nonlinear approximation via compositions
- A priori estimates of the population risk for two-layer neural networks
- Error bounds for approximations with deep ReLU networks
- Universal approximation bounds for superpositions of a sigmoidal function
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Deep Network Approximation Characterized by Number of Neurons
- A note on the expressive power of deep rectified linear unit networks in high‐dimensional spaces
- Approximation by superpositions of a sigmoidal function
- Neural network approximation: three hidden layers are enough