On sharpness of error bounds for univariate approximation by single hidden layer feedforward neural networks
DOI: 10.1007/s00025-020-01239-8 · zbMATH Open: 1443.62314 · arXiv: 1811.05199 · OpenAlex: W3038177324 · MaRDI QID: Q777322
Publication date: 7 July 2020
Published in: Results in Mathematics
Full work available at URL: https://arxiv.org/abs/1811.05199
Keywords: neural networks; rates of convergence; counterexamples; sharpness of error bounds; uniform boundedness principle
MSC classifications:
- Generalized linear models (logistic models) (62J12)
- Neural nets and related approaches to inference from stochastic processes (62M45)
- Best approximation, Chebyshev systems (41A50)
- Rate of convergence, degree of approximation (41A25)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- Understanding Machine Learning
- On nonlinear condensation principles with rates
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Approximation results for neural network operators activated by sigmoidal functions
- Constructive Approximation by Superposition of Sigmoidal Functions
- Uniform approximation by neural networks
- Intelligent systems. Approximation by artificial neural networks
- On the degree of approximation by manifolds of finite pseudo-dimension
- Multivariate hyperbolic tangent neural network approximation
- Quantitative extensions of the uniform boundedness principle
- Approximation of functions of finite variation by superpositions of a sigmoidal function.
- A sharp error estimate for numerical Fourier transform of band-limited functions based on windowed samples
- The essential order of approximation for neural networks
- Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers
- Polynomial bounds for VC dimension of sigmoidal and general Pfaffian neural networks
- Interpolation by neural network operators activated by ramp functions
- A general approach to counterexamples in numerical analysis
- New study on neural networks: the essential order of approximation
- Universal Approximation by Ridge Computational Models and Neural Networks: A Survey
- The essential order of approximation for nearly exponential type neural networks
- The Lost Cousin of the Fundamental Theorem of Algebra
- Approximation and estimation bounds for free knot splines
- Saturation classes for MAX-product neural network operators activated by sigmoidal functions
- On the near optimality of the stochastic approximation of smooth functions by neural networks
- A counterexample regarding "New study on neural networks: the essential order of approximation"
- On the Sharpness of Estimates in Terms of Averages
- Efficient estimation of neural weights by polynomial approximation
- A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function
Cited In (8)
- On sharpness of error bounds for multivariate neural network approximation
- On the approximation by single hidden layer feedforward neural networks with fixed weights
- Asymptotic expansion for neural network operators of the Kantorovich type and high order of approximation
- Approximation error for neural network operators by an averaged modulus of smoothness
- On sharpness of an error bound for deep ReLU network approximation
- Asymptotic analysis of neural network operators employing the Hardy-Littlewood maximal inequality
- Density results by deep neural network operators with integer weights
- Quantitative estimates for neural network operators implied by the asymptotic behaviour of the sigmoidal activation functions