On sharpness of error bounds for univariate approximation by single hidden layer feedforward neural networks
DOI: 10.1007/s00025-020-01239-8 · zbMath: 1443.62314 · arXiv: 1811.05199 · OpenAlex: W3038177324 · MaRDI QID: Q777322
Publication date: 7 July 2020
Published in: Results in Mathematics
Full work available at URL: https://arxiv.org/abs/1811.05199
Keywords: neural networks; uniform boundedness principle; rates of convergence; counterexamples; sharpness of error bounds
MSC classification: Generalized linear models (logistic models) (62J12); Best approximation, Chebyshev systems (41A50); Rate of convergence, degree of approximation (41A25); Neural nets and related approaches to inference from stochastic processes (62M45)
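The publication concerns error bounds for approximating univariate functions by single hidden layer feedforward networks, i.e. finite superpositions of a sigmoidal function. The following is a minimal illustrative sketch of that model class, not a method from the paper itself: the inner weights, the least-squares fit, and the target function are all assumptions chosen for the example.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Target univariate function to approximate on [0, 1] (illustrative choice).
def f(x):
    return np.sin(2 * np.pi * x)

rng = np.random.default_rng(0)
n_neurons = 20                       # width of the single hidden layer
a = rng.uniform(-10, 10, n_neurons)  # inner weights (fixed here for simplicity)
b = rng.uniform(-10, 10, n_neurons)  # inner biases

x = np.linspace(0.0, 1.0, 200)
# Design matrix: column k holds sigma(a_k * x + b_k) evaluated on the grid.
H = sigmoid(np.outer(x, a) + b)

# Least-squares fit of the outer weights c, so that
# f(x) ~ sum_k c_k * sigma(a_k * x + b_k).
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)

approx = H @ c
print("max error on the grid:", np.max(np.abs(approx - f(x))))
```

Increasing `n_neurons` typically reduces the uniform error on the grid; how fast such errors can decrease with the number of hidden neurons, and whether known bounds are sharp, is the subject of the cited paper.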
Cites Work
- A sharp error estimate for numerical Fourier transform of band-limited functions based on windowed samples
- Approximation results for neural network operators activated by sigmoidal functions
- Intelligent systems. Approximation by artificial neural networks
- Multivariate hyperbolic tangent neural network approximation
- Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks
- The essential order of approximation for neural networks
- A general approach to counterexamples in numerical analysis
- On nonlinear condensation principles with rates
- Quantitative extensions of the uniform boundedness principle
- Uniform approximation by neural networks
- On the degree of approximation by manifolds of finite pseudo-dimension
- Saturation classes for MAX-product neural network operators activated by sigmoidal functions
- Multilayer feedforward networks are universal approximators
- Approximation of functions of finite variation by superpositions of a sigmoidal function.
- New study on neural networks: the essential order of approximation
- Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers
- On the near optimality of the stochastic approximation of smooth functions by neural networks
- A counterexample regarding "New study on neural networks: the essential order of approximation"
- Interpolation by neural network operators activated by ramp functions
- Approximation and estimation bounds for free knot splines
- The essential order of approximation for nearly exponential type neural networks
- Universal Approximation by Ridge Computational Models and Neural Networks: A Survey
- The Lost Cousin of the Fundamental Theorem of Algebra
- On the Sharpness of Estimates in Terms of Averages
- Universal approximation bounds for superpositions of a sigmoidal function
- Efficient estimation of neural weights by polynomial approximation
- A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function
- Constructive Approximation by Superposition of Sigmoidal Functions
- Understanding Machine Learning
- Approximation by superpositions of a sigmoidal function