Uniform approximation by neural networks
Publication: 1273409
DOI: 10.1006/jath.1997.3217
zbMath: 0932.41016
OpenAlex: W2026307262
MaRDI QID: Q1273409
Publication date: 8 March 2000
Published in: Journal of Approximation Theory
Full work available at URL: https://doi.org/10.1006/jath.1997.3217
MSC classification:
- Abstract approximation theory (approximation in normed linear spaces and other abstract spaces) (41A65)
- Approximation by other special function classes (41A30)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
- Research exposition (monographs, survey articles) pertaining to approximations and expansions (41-02)
Related Items (50)
- On Approximation by Neural Networks with Optimized Activation Functions and Fixed Weights
- Approximation theorems for a family of multivariate neural network operators in Orlicz-type spaces
- Approximation by max-product neural network operators of Kantorovich type
- Approximation by ridge function fields over compact sets
- Approximation by network operators with logistic activation functions
- Max-product neural network and quasi-interpolation operators activated by sigmoidal functions
- Multivariate Jackson-type inequality for a new type neural network approximation
- The estimate for approximation error of spherical neural networks
- A Sobolev-type upper bound for rates of approximation by linear combinations of Heaviside plane waves
- The errors of approximation for feedforward neural networks in the Lp metric
- Uniform approximation rates and metric entropy of shallow neural networks
- Neural network interpolation operators activated by smooth ramp functions
- Neural network operators: constructive interpolation of multivariate functions
- On the tractability of multivariate integration and approximation by neural networks
- Approximation by neural networks with sigmoidal functions
- Voronovskaja type theorems and high-order convergence neural network operators with sigmoidal functions
- Saturation classes for MAX-product neural network operators activated by sigmoidal functions
- Modified neural network operators and their convergence properties with summability methods
- Pointwise and uniform approximation by multivariate neural network operators of the max-product type
- Construction and approximation for a class of feedforward neural networks with sigmoidal function
- Approximation by neural networks with a bounded number of nodes at each level
- Neural network interpolation operators optimized by Lagrange polynomial
- Characterization of the variation spaces corresponding to shallow neural networks
- A comparison between fixed-basis and variable-basis schemes for function approximation and functional optimization
- Convergence results for a family of Kantorovich max-product neural network operators in a multivariate setting
- Neural network approximation and estimation of classifiers with classification boundary in a Barron class
- The construction and approximation of feedforward neural network with hyperbolic tangent function
- Convergence for a family of neural network operators in Orlicz spaces
- Essential rate for approximation by spherical neural networks
- The errors of simultaneous approximation of multivariate functions by neural networks
- Approximation results for neural network operators activated by sigmoidal functions
- Multivariate neural network operators with sigmoidal activation functions
- Minimization of Error Functionals over Perceptron Networks
- Approximation by neural networks with weights varying on a finite set of directions
- Interpolation by neural network operators activated by ramp functions
- Convergence of a family of neural network operators of the Kantorovich type
- Multivariate sigmoidal neural network approximation
- The approximation operators with sigmoidal functions
- Asymptotic expansion for neural network operators of the Kantorovich type and high order of approximation
- Optimization based on quasi-Monte Carlo sampling to design state estimators for non-linear systems
- Piecewise convexity of artificial neural networks
- Estimates of the approximation error using Rademacher complexity: Learning vector-valued functions
- Quantitative approximation by perturbed Kantorovich-Choquet neural network operators
- Rates of approximation by neural network interpolation operators
- On sharpness of error bounds for univariate approximation by single hidden layer feedforward neural networks
- Approximation by perturbed neural network operators
- Rates of minimization of error functionals over Boolean variable-basis functions
- Approximations by multivariate perturbed neural network operators
- A New Function Space from Barron Class and Application to Neural Network Approximation
Cites Work
- On trigonometric n-widths and their generalization
- Nonlinear approximation by trigonometric sums
- Random approximants and neural networks
- Universal approximation bounds for superpositions of a sigmoidal function
- Decomposition theorems and approximation by a “floating” system of exponentials
- Sup-norm approximation bounds for networks through probabilistic methods
- Lower Bounds for Approximation by Nonlinear Manifolds
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities