On sharpness of an error bound for deep ReLU network approximation
From MaRDI portal
Publication: 2143606
DOI: 10.1007/S43670-022-00020-Y · zbMath: 1505.41007 · OpenAlex: W4293084530 · MaRDI QID: Q2143606
Publication date: 31 May 2022
Published in: Sampling Theory, Signal Processing, and Data Analysis
Full work available at URL: https://doi.org/10.1007/s43670-022-00020-y
neural networks; uniform boundedness principle; rates of convergence; counterexamples; sharpness of error bounds
Artificial neural networks and deep learning (68T07); Best approximation, Chebyshev systems (41A50); Rate of convergence, degree of approximation (41A25); Neural nets and related approaches to inference from stochastic processes (62M45)
Cites Work
- On sharpness of error bounds for univariate approximation by single hidden layer feedforward neural networks
- Quantitative extensions of the uniform boundedness principle
- Multilayer feedforward networks are universal approximators
- Approximation properties of a multilayered feedforward artificial neural network
- Optimal approximation rate of ReLU networks in terms of width and depth
- Error bounds for approximations with deep ReLU networks
- Deep vs. shallow networks: An approximation theory perspective
- On uniform boundedness principles and Banach-Steinhaus theorems with rates
- Deep Neural Network Approximation Theory
- Error bounds for approximations with deep ReLU neural networks in Ws,p norms
- Approximation by superpositions of a sigmoidal function
- On sharpness of error bounds for multivariate neural network approximation