High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
DOI: 10.1016/j.acha.2021.12.005
zbMath: 1501.41006
arXiv: 2012.07205
OpenAlex: W4200096621
MaRDI QID: Q2118396
Jonathan W. Siegel, Jin-Chao Xu
Publication date: 22 March 2022
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/2012.07205
- Nontrigonometric harmonic analysis involving wavelets and other special systems (42C40)
- Sobolev spaces and other spaces of ``smooth'' functions, embedding theorems, trace theorems (46E35)
- Numerical interpolation (65D05)
- Approximation by other special function classes (41A30)
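For orientation, the shallow networks referred to in the title are single-hidden-layer models of width \(n\). The following display is a standard way of writing this model class and is given here only as a sketch (the symbols \(a_j\), \(\omega_j\), \(b_j\) for outer weights, directions, and biases are generic notation, not quoted from the entry):
\[
f_n(x) \;=\; \sum_{j=1}^{n} a_j\,\sigma(\omega_j\cdot x + b_j),
\qquad
\sigma(t)=\cos(t)\ \text{ or }\ \sigma(t)=\mathrm{ReLU}^k(t)=\max(0,t)^k .
\]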
Related Items (10)
Cites Work
- Approximation results for neural network operators activated by sigmoidal functions
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- On best approximation by ridge functions
- Random approximants and neural networks
- Approximation rates for neural networks with general activation functions
- Approximation by series of sigmoidal functions with applications to neural networks
- Error bounds for approximations with deep ReLU networks
- Quasiorthogonal dimension of Euclidean spaces
- Interpolation polynomials on the triangle
- Lower bounds of the discretization error for piecewise polynomials
- Best approximation by ridge functions in \(L^p\)-spaces
- Spline Functions on Triangulations
- Geometric Upper Bounds on Rates of Variable-Basis Approximation
- Ten Lectures on Wavelets
- Approximation by Ridge Functions and Neural Networks
- Universal approximation bounds for superpositions of a sigmoidal function
- Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
- Bounds on rates of variable-basis and neural-network approximation
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With \(\ell^1\) and \(\ell^0\) Controls
- Finite Neuron Method and Convergence Analysis
- Sparse grids
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Convex Analysis
- Triangular Elements in the Finite Element Method
- On the best approximation by ridge functions in the uniform norm