Error bounds for ReLU networks with depth and width parameters
From MaRDI portal
Publication: 2111556
DOI: 10.1007/s13160-022-00515-0
OpenAlex: W4281936743
MaRDI QID: Q2111556
FDO: Q2111556
Authors: Jae-Mo Kang, Sunghwan Moon
Publication date: 17 January 2023
Published in: Japan Journal of Industrial and Applied Mathematics
Full work available at URL: https://doi.org/10.1007/s13160-022-00515-0
Recommendations
- Error bounds for approximations with deep ReLU networks
- New error bounds for deep ReLU networks using sparse grids
- Error bounds for approximations with deep ReLU neural networks in \(W^{s , p}\) norms
- On sharpness of an error bound for deep ReLU network approximation
- Optimal approximation rate of ReLU networks in terms of width and depth
Mathematics Subject Classification: Learning and adaptive systems in artificial intelligence (68T05); Rate of convergence, degree of approximation (41A25)
Cites Work
- Title not available
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Functions, spaces, and expansions. Mathematical tools in physics and engineering
- Approximation and estimation bounds for artificial neural networks
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Deep vs. shallow networks: an approximation theory perspective
- Nonparametric regression using deep neural networks with ReLU activation function
- Theory of deep convolutional neural networks: downsampling
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Title not available
Cited In (4)
- Towards Lower Bounds on the Depth of ReLU Neural Networks
- Rademacher complexity and the generalization error of residual networks
- Fast generalization error bound of deep learning without scale invariance of activation functions
- Error Analysis and Improving the Accuracy of Winograd Convolution for Deep Neural Networks
This page was built for publication: Error bounds for ReLU networks with depth and width parameters