High-order approximation rates for shallow neural networks with cosine and ReLU^k activation functions
Publication:2118396
Abstract: We study the approximation properties of shallow neural networks with an activation function that is a power of the rectified linear unit. Specifically, we consider the dependence of the approximation rate on the dimension and on the smoothness, in the spectral Barron space, of the underlying function \(f\) to be approximated. We show that as the smoothness index \(s\) of \(f\) increases, shallow neural networks with \(\mathrm{ReLU}^k\) activation function attain an improved approximation rate up to a best possible rate of \(O(n^{-(k+1)}\log(n))\) in \(L^2\), independent of the dimension \(d\). The significance of this result is that the activation function \(\mathrm{ReLU}^k\) is fixed independently of the dimension, while for classical methods the degree of polynomial approximation or the smoothness of the wavelets used would have to increase in order to take advantage of the dimension-dependent smoothness of \(f\). In addition, we derive improved approximation rates for shallow neural networks with cosine activation function on the spectral Barron space. Finally, we prove lower bounds showing that the approximation rates attained are optimal under the given assumptions.
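To fix notation, the following is a hedged paraphrase in standard spectral Barron conventions, not a verbatim statement from the paper. A shallow network of width \(n\) with \(\mathrm{ReLU}^k\) activation is a function of the form
\[ f_n(x) = \sum_{j=1}^{n} a_j\,[\max(0,\, \omega_j \cdot x + b_j)]^k, \qquad a_j, b_j \in \mathbb{R},\ \omega_j \in \mathbb{R}^d, \]
and one common definition of the spectral Barron norm with smoothness index \(s\) is
\[ \|f\|_{\mathcal{B}^s} = \int_{\mathbb{R}^d} (1 + |\xi|)^s\, |\hat{f}(\xi)|\, d\xi. \]
In this notation the result described above reads, roughly, that for a bounded domain \(\Omega \subset \mathbb{R}^d\) and \(f\) with finite \(\mathcal{B}^s\)-norm and \(s\) sufficiently large,
\[ \inf_{f_n} \|f - f_n\|_{L^2(\Omega)} \le C\, n^{-(k+1)} \log(n)\, \|f\|_{\mathcal{B}^s}, \]
with \(C\) independent of the dimension \(d\); the precise smoothness threshold and the exact form of the logarithmic factor should be taken from the paper itself.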
Recommendations
- Uniform approximation rates and metric entropy of shallow neural networks
- Approximation of smoothness classes by deep rectifier networks
- Error bounds for approximations with deep ReLU networks
- Approximation rates for neural networks with general activation functions
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
Cites work
- scientific article; zbMATH DE number 3758364 (title not available)
- scientific article; zbMATH DE number 3772326 (title not available)
- scientific article; zbMATH DE number 42453 (title not available)
- scientific article; zbMATH DE number 1215245 (title not available)
- scientific article; zbMATH DE number 477682 (title not available)
- scientific article; zbMATH DE number 671791 (title not available)
- scientific article; zbMATH DE number 713342 (title not available)
- On the best approximation by ridge functions in the uniform norm
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Approximation by combinations of ReLU and squared ReLU ridge functions with \(\ell^1\) and \(\ell^0\) controls
- Approximation by Ridge Functions and Neural Networks
- Approximation by series of sigmoidal functions with applications to neural networks
- Approximation rates for neural networks with general activation functions
- Approximation results for neural network operators activated by sigmoidal functions
- Best approximation by ridge functions in \(L_p\)-spaces
- Bounds on rates of variable-basis and neural-network approximation
- Breaking the curse of dimensionality with convex neural networks
- Convex Analysis
- Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
- Error bounds for approximations with deep ReLU networks
- Finite neuron method and convergence analysis
- Geometric Upper Bounds on Rates of Variable-Basis Approximation
- Interpolation polynomials on the triangle
- Lower bounds of the discretization error for piecewise polynomials
- On best approximation by ridge functions
- Quasiorthogonal dimension of Euclidean spaces
- Random approximants and neural networks
- Sparse grids
- Spline Functions on Triangulations
- Ten Lectures on Wavelets
- Triangular Elements in the Finite Element Method
- Universal approximation bounds for superpositions of a sigmoidal function
Cited in (21)
- Construction and approximation rate for feedforward neural network operators with sigmoidal functions
- Wasserstein generative adversarial uncertainty quantification in physics-informed neural networks
- A New Function Space from Barron Class and Application to Neural Network Approximation
- Uniform approximation rates and metric entropy of shallow neural networks
- Sharp Bounds on the Approximation Rates, Metric Entropy, and n-Widths of Shallow Neural Networks
- Embeddings between Barron spaces with higher-order activation functions
- Approximation of functions from Korobov spaces by shallow neural networks
- How can deep neural networks fail even with global optima?
- Approximation results for gradient flow trained shallow neural networks in \(1d\)
- Sampling complexity of deep approximation spaces
- A Quantitative Functional Central Limit Theorem for Shallow Neural Networks
- Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
- Gauss-Newton method for solving variational problems of PDEs with neural network discretizations
- Infinitely many coexisting attractors and scrolls in a fractional-order discrete neuron map
- Approximation properties of deep ReLU CNNs
- A regularity theory for static Schrödinger equations on \(\mathbb{R}^d\) in spectral Barron spaces
- Approximation bounds for convolutional neural networks in operator learning
- A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems
- Rates of approximation by ReLU shallow neural networks
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations