On the approximation by single hidden layer feedforward neural networks with fixed weights
From MaRDI portal
Publication: 2179313
DOI: 10.1016/j.neunet.2017.12.007
zbMath: 1437.68062
arXiv: 1708.06219
OpenAlex: W2747971139
Wikidata: Q47568517 (Scholia: Q47568517)
MaRDI QID: Q2179313
Vugar E. Ismailov, Namig J. Guliyev
Publication date: 12 May 2020
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1708.06219
MSC classifications:
- Approximation by other special function classes (41A30)
- Biologically inspired models of computation (DNA computing, membrane computing, etc.) (68Q07)
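The networks in the paper's title are single hidden layer feedforward networks whose inner (hidden-layer) weights are fixed, so that only the thresholds and outer coefficients vary. A minimal illustrative sketch of this network form, f(x) = Σᵢ cᵢ σ(w·x + θᵢ); the function name and parameters here are our own illustration, not taken from the paper:

```python
import math

def slfn(x, w, thresholds, coeffs, activation=math.tanh):
    """Evaluate a single hidden layer feedforward network at a point x:

        f(x) = sum_i c_i * activation(w * x + theta_i)

    The inner weight w is FIXED and shared by all hidden neurons
    (the "fixed weights" setting of the paper's title); only the
    thresholds theta_i and outer coefficients c_i differ per neuron.
    """
    return sum(c * activation(w * x + t) for c, t in zip(coeffs, thresholds))

# Example evaluation with two hidden neurons (illustrative values):
y = slfn(0.5, 1.0, thresholds=[0.0, -1.0], coeffs=[1.0, 0.5])
```

The sketch uses `tanh` as a stand-in sigmoidal activation; the paper's results concern which activation functions and fixed weights permit approximation of arbitrary continuous functions.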
Related Items (26)
- SelectNet: self-paced learning for high-dimensional partial differential equations
- Constructive Approximation of Continuous Interval-Valued Functions
- Full error analysis for the training of deep neural networks
- An interval uncertainty analysis method for structural response bounds using feedforward neural network differentiation
- Asymptotic expansions and Voronovskaja type theorems for the multivariate neural network operators
- Robust min-max optimal control design for systems with uncertain models: a neural dynamic programming approach
- On the approximation of functions by tanh neural networks
- The generalized extreme learning machines: tuning hyperparameters and limiting approach for the Moore-Penrose generalized inverse
- Approximation capabilities of neural networks on unbounded domains
- Some aspects of approximation and interpolation of functions by artificial neural networks
- Some elliptic second order problems and neural network solutions: existence and error estimates
- Neural network interpolation operators optimized by Lagrange polynomial
- On decision regions of narrow deep neural networks
- Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
- Neural network interpolation operators of multivariate functions
- Lower bounds for artificial neural network approximations: a proof that shallow neural networks fail to overcome the curse of dimensionality
- Fractional type multivariate neural network operators
- Approximation error of single hidden layer neural networks with fixed weights
- Computing the Approximation Error for Neural Networks with Weights Varying on Fixed Directions
- Negative results for approximation using single layer and multilayer feedforward neural networks
- Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients
- Asymptotic expansion for neural network operators of the Kantorovich type and high order of approximation
- Two-hidden-layer feed-forward networks are universal approximators: a constructive approach
- On the representation by bivariate ridge functions
- Rates of approximation by neural network interpolation operators
- The construction and approximation of ReLU neural network operators
Cites Work
- Approximation by neural networks with scattered data
- Approximation by max-product neural network operators of Kantorovich type
- Approximation by neural networks with weights varying on a finite set of directions
- The approximation operators with sigmoidal functions
- Approximation by Ridge functions and neural networks with one hidden layer
- Approximation by superposition of sigmoidal and radial basis functions
- Lower bounds for approximation by MLP neural networks
- Fundamentality of ridge functions
- Neural network operators: constructive interpolation of multivariate functions
- An approximation by neural networks with a fixed weight
- A neural network model with bounded-weights for pattern classification
- Limitations of the approximation capabilities of neural networks with one hidden layer
- On the approximation of the step function by some sigmoid functions
- Limitations of shallow nets approximation
- Pointwise and uniform approximation by multivariate neural network operators of the max-product type
- Max-product neural network and quasi-interpolation operators activated by sigmoidal functions
- Recounting the Rationals
- Deep vs. shallow networks: An approximation theory perspective
- Convergence for a family of neural network operators in Orlicz spaces
- Approximation by ridge functions and neural networks with a bounded number of neurons
- Ridge Functions
- Measure Theoretic Results for Approximation by Neural Networks with Limited Weights
- Approximation by sums of ridge functions with fixed directions
- A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function
- Constructive Approximation by Superposition of Sigmoidal Functions
- Approximation by superpositions of a sigmoidal function