High-order approximation rates for shallow neural networks with cosine and ReLU^k activation functions
DOI: 10.1016/j.acha.2021.12.005
zbMATH Open: 1501.41006
arXiv: 2012.07205
OpenAlex: W4200096621
MaRDI QID: Q2118396
Authors: Jonathan W. Siegel, Jinchao Xu
Publication date: 22 March 2022
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/2012.07205
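As background for the title, a shallow (single-hidden-layer) network with \(n\) neurons and activation \(\sigma\) is the standard model class sketched below; the notation is a common convention and is not taken from this record itself:
\[
f_n(x) \;=\; \sum_{j=1}^{n} a_j\, \sigma(\omega_j \cdot x + b_j),
\qquad
\sigma(t) = \mathrm{ReLU}^k(t) := \max(0,t)^k
\quad\text{or}\quad
\sigma(t) = \cos(t).
\]
The approximation rates in question measure how fast the error \(\inf_{f_n} \| f - f_n \|\) decays as \(n \to \infty\) for target functions \(f\) in a suitable class.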
Recommendations
- Uniform approximation rates and metric entropy of shallow neural networks
- Approximation of smoothness classes by deep rectifier networks
- Error bounds for approximations with deep ReLU networks
- Approximation rates for neural networks with general activation functions
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
MSC Classification
- Nontrigonometric harmonic analysis involving wavelets and other special systems (42C40)
- Numerical interpolation (65D05)
- Approximation by other special function classes (41A30)
- Sobolev spaces and other spaces of "smooth" functions, embedding theorems, trace theorems (46E35)
Cites Work
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Universal approximation bounds for superpositions of a sigmoidal function
- Ten Lectures on Wavelets
- Sparse grids
- Convex Analysis
- Random approximants and neural networks
- Title not available
- Title not available
- Title not available
- Spline Functions on Triangulations
- Title not available
- Approximation results for neural network operators activated by sigmoidal functions
- Approximation by series of sigmoidal functions with applications to neural networks
- On best approximation by ridge functions
- Approximation rates for neural networks with general activation functions
- Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
- Title not available
- Approximation by Ridge Functions and Neural Networks
- Geometric Upper Bounds on Rates of Variable-Basis Approximation
- Lower bounds of the discretization error for piecewise polynomials
- Bounds on rates of variable-basis and neural-network approximation
- Title not available
- Title not available
- Triangular Elements in the Finite Element Method
- On the best approximation by ridge functions in the uniform norm
- Interpolation polynomials on the triangle
- Error bounds for approximations with deep ReLU networks
- Breaking the curse of dimensionality with convex neural networks
- Best approximation by ridge functions in \(L_p\)-spaces
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions with \(\ell^1\) and \(\ell^0\) Controls
- Finite Neuron Method and Convergence Analysis
- Quasiorthogonal dimension of Euclidean spaces
Cited In (17)
- Gauss-Newton method for solving variational problems of PDEs with neural network discretizations
- Construction and approximation rate for feedforward neural network operators with sigmoidal functions
- A New Function Space from Barron Class and Application to Neural Network Approximation
- A Regularity Theory for Static Schrödinger Equations on \(\mathbb{R}^d\) in Spectral Barron Spaces
- A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- Sharp Bounds on the Approximation Rates, Metric Entropy, and n-Widths of Shallow Neural Networks
- Uniform approximation rates and metric entropy of shallow neural networks
- A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations
- Wasserstein generative adversarial uncertainty quantification in physics-informed neural networks
- How can deep neural networks fail even with global optima?
- Approximation results for gradient flow trained shallow neural networks in \(1d\)
- Sampling complexity of deep approximation spaces
- Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
- Approximation properties of deep ReLU CNNs
- Approximation bounds for convolutional neural networks in operator learning
- Infinitely many coexisting attractors and scrolls in a fractional-order discrete neuron map