Rates of approximation by ReLU shallow neural networks
DOI: 10.1016/j.jco.2023.101784 · zbMath: 1524.68322 · arXiv: 2307.12461 · MaRDI QID: Q6062171
Authors: Tong Mao, Ding-Xuan Zhou
Publication date: 30 November 2023
Published in: Journal of Complexity
Full work available at URL: https://arxiv.org/abs/2307.12461
Classification: Artificial neural networks and deep learning (68T07); Rate of convergence, degree of approximation (41A25)
Cites Work
- Minimum Sobolev norm interpolation with trigonometric polynomials on the torus
- Multilayer feedforward networks are universal approximators
- Function approximation with zonal function networks with activation functions analogous to the rectified linear unit functions
- Provable approximation properties for deep neural networks
- Optimal nonlinear approximation
- Approximation properties of a multilayered feedforward artificial neural network
- Weak convergence and empirical processes. With applications to statistics
- Theory of deep convolutional neural networks. II: Spherical analysis
- Modeling interactive components by coordinate kernel polynomial models
- Approximation of functions from Korobov spaces by deep convolutional neural networks
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Theory of deep convolutional neural networks: downsampling
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- On convergence and growth of partial sums of Fourier series
- Deep vs. shallow networks: An approximation theory perspective
- Universal approximation bounds for superpositions of a sigmoidal function
- Deep distributed convolutional neural networks: Universality
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions with $\ell^1$ and $\ell^0$ Controls
- Optimal Approximation with Sparsely Connected Deep Neural Networks
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Learning theory of minimum error entropy under weak moment conditions
- Error bounds for approximations with deep ReLU neural networks in $W^{s,p}$ norms
- Regularization schemes for minimum error entropy principle
- Thresholded spectral algorithms for sparse approximations
- Breaking the Curse of Dimensionality with Convex Neural Networks
- A Fast Learning Algorithm for Deep Belief Nets
- On the convergence of multiple Fourier series
- Approximation by superpositions of a sigmoidal function
- Theory of deep convolutional neural networks. III: Approximating radial functions