An Integral Upper Bound for Neural Network Approximation
From MaRDI portal
Publication:3399370
DOI: 10.1162/NECO.2009.04-08-745 · zbMATH Open: 1186.68370 · DBLP: journals/neco/KainenK09 · OpenAlex: W1987961932 · Wikidata: Q48503903 · Scholia: Q48503903 · MaRDI QID: Q3399370 · FDO: Q3399370
Authors: Paul C. Kainen, Věra Kůrková
Publication date: 12 October 2009
Published in: Neural Computation
Full work available at URL: http://www.nusl.cz/ntk/nusl-39633
Cites Work
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Universal approximation bounds for superpositions of a sigmoidal function
- Title not available
- Title not available
- Error Estimates for Approximate Optimization by the Extended Ritz Method
- Rates of convex approximation in non-Hilbert spaces
- Complexity of Gaussian-radial-basis networks approximating smooth functions
- Approximation and learning of convex superpositions
- A Sobolev-type upper bound for rates of approximation by linear combinations of Heaviside plane waves
- Title not available
Cited In (18)
- A tight upper bound on the generalization error of feedforward neural networks
- Integral combinations of Heavisides
- Continuous limits of residual neural networks in case of large input data
- Title not available
- The construction and approximation of the neural network with two weights
- Approximation theorems for a family of multivariate neural network operators in Orlicz-type spaces
- Convergence for a family of neural network operators in Orlicz spaces
- Approximation by max-product neural network operators of Kantorovich type
- Asymptotic expansion for neural network operators of the Kantorovich type and high order of approximation
- Piecewise convexity of artificial neural networks
- On the tractability of multivariate integration and approximation by neural networks
- Lower bounds for artificial neural network approximations: a proof that shallow neural networks fail to overcome the curse of dimensionality
- Title not available
- Complexity estimates based on integral transforms induced by computational units
- Measure Theoretic Results for Approximation by Neural Networks with Limited Weights
- Convergence results for a family of Kantorovich max-product neural network operators in a multivariate setting
- Saturation classes for MAX-product neural network operators activated by sigmoidal functions
- A Framework for the Construction of Upper Bounds on the Number of Affine Linear Regions of ReLU Feed-Forward Neural Networks