Universal approximation bounds for superpositions of a sigmoidal function
Publication: 4277151
DOI: 10.1109/18.256500
zbMATH Open: 0818.68126
OpenAlex: W2166116275
MaRDI QID: Q4277151
FDO: Q4277151
Author: Andrew R. Barron
Publication date: 7 February 1994
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://semanticscholar.org/paper/04113e8974341f97258800126d05fd8df2751b7e
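For orientation, here is a brief sketch of the paper's central result, stated from the standard form of Barron's theorem rather than quoted from this record: if \(f\) has a Fourier representation on \(\mathbb{R}^d\) with finite spectral moment \(C_f = \int_{\mathbb{R}^d} |\omega|\,|\tilde f(\omega)|\,d\omega\), then for every \(n \ge 1\) there is a one-hidden-layer network \(f_n\) of \(n\) sigmoidal units satisfying

\[
\int_{B_r} \bigl(f(x) - f_n(x)\bigr)^2\,\mu(dx) \;\le\; \frac{(2 r C_f)^2}{n}
\]

for any probability measure \(\mu\) on the ball \(B_r\) of radius \(r\); that is, the \(L^2\) approximation error is of order \(C_f/\sqrt{n}\), with no exponential dependence on the dimension \(d\).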
Recommendations
- Approximation by superpositions of a sigmoidal function
- Approximation and estimation bounds for artificial neural networks
- Estimation of approximating rate for neural network in \(L^p_w\) spaces
- Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
- Approximation Capability of Layered Neural Networks with Sigmoid Units on Two Layers
Cited In (only the first 100 items are shown)
- A class \(+1\) sigmoidal activation functions for FFANNs
- Optimal deep neural networks by maximization of the approximation power
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- Restricted polynomial regression
- Robust control of dynamical systems using neural networks with input–output feedback linearization
- Approximate models for nonlinear dynamical systems and their generalization properties
- Predictive neuro-control of uncertain systems: Design and use of a neuro-optimizer
- On universal estimators in learning theory
- Some extensions of radial basis functions and their applications in artificial intelligence
- Approximation theorems for a family of multivariate neural network operators in Orlicz-type spaces
- On the achievability of blind source separation for high-dimensional nonlinear source mixtures
- Solving numerically nonlinear systems of balance laws by multivariate sigmoidal functions approximation
- Convex polynomial and ridge approximation of Lipschitz functions in \(\mathbb R^d\)
- Almost optimal estimates for approximation and learning by radial basis function networks
- Selection dynamics for deep neural networks
- Convergence for a family of neural network operators in Orlicz spaces
- A note on error bounds for function approximation using nonlinear networks
- The complexity of model classes, and smoothing noisy data
- High-dimensional dynamics of generalization error in neural networks
- Neural network interpolation operators optimized by Lagrange polynomial
- Deep network with approximation error being reciprocal of width to power of square root of depth
- Extension of localised approximation by neural networks
- Adaptive control of nonlinear dynamic systems using \(\theta\)-adaptive neural networks
- Convergence of a least-squares Monte Carlo algorithm for American option pricing with dependent sample data
- Approximation properties of local bases assembled from neural network transfer functions
- Annealing stochastic approximation Monte Carlo algorithm for neural network training
- A note on error bounds for approximation in inner product spaces
- Stable adaptive neuro-control design via Lyapunov function derivative estimation
- A note on universal approximation by hierarchical fuzzy systems
- Pointwise and uniform approximation by multivariate neural network operators of the max-product type
- Convergence analysis of convex incremental neural networks
- Global Mittag-Leffler stability of complex valued fractional-order neural network with discrete and distributed delays
- Fusion methods for multiple sensor systems with unknown error densities
- Stochastically ordered multiple regression
- Approximation-based fixed-time adaptive tracking control for a class of uncertain nonlinear pure-feedback systems
- Neural network approximation
- Complexity of gene circuits, Pfaffian functions and the morphogenesis problem.
- Neural networks and nonlinear statistical methods: An application to the modelling of price-quality relationships
- Models of knowing and the investigation of dynamical systems
- Convergence results for a family of Kantorovich max-product neural network operators in a multivariate setting
- Minimization of Error Functionals over Perceptron Networks
- Approximation by ridge function fields over compact sets
- Solving Fredholm integral equations using deep learning
- Rates of minimization of error functionals over Boolean variable-basis functions
- Gabor neural networks with proven approximation properties
- Algorithms and complexity in biological pattern formation problems
- Nonlinear stable adaptive control based upon Elman networks
- Semi-nonparametric approximation and index options
- Modelling the dynamics of nonlinear time series using canonical variate analysis
- Optimal approximation with sparsely connected deep neural networks
- A methodology for the constructive approximation of nonlinear operators defined on noncompact sets
- Non intrusive reduced order modeling of parametrized PDEs by kernel POD and neural networks
- Approximation with random bases: pro et contra
- Insights into randomized algorithms for neural networks: practical issues and common pitfalls
- Universality of deep convolutional neural networks
- A machine learning framework for data driven acceleration of computations of differential equations
- Approximation of discontinuous signals by sampling Kantorovich series
- Machine learning from a continuous viewpoint. I
- Geometric Rates of Approximation by Neural Networks
- New insights into Witsenhausen's counterexample
- Instability, complexity, and evolution
- Computing the approximation error for neural networks with weights varying on fixed directions
- Lower estimation of approximation rate for neural networks
- Rates of approximation by ReLU shallow neural networks
- Rates of approximation by neural network interpolation operators
- Approximating and simulating the stochastic growth model: Parameterized expectations, neural networks, and the genetic algorithm
- Adaptive-critic based optimal neuro control synthesis for distributed parameter systems
- Nonlinear dynamical system identification with dynamic noise and observational noise
- Neural networks and seasonality: Some technical considerations
- Nonlinear function approximation: computing smooth solutions with an adaptive greedy algorithm
- Voronovskaja type theorems and high-order convergence neural network operators with sigmoidal functions
- Deep neural network structures solving variational inequalities
- Squared and absolute errors in optimal approximation of nonlinear systems.
- Learning a function from noisy samples at a finite sparse set of points
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Neural networks and logistic regression. II.
- Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations
- Parameter Estimation of Sigmoid Superpositions: Dynamical System Approach
- A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics
- Towards long-term prediction
- Neural network with unbounded activation functions is universal approximator
- Geometrical aspects of discrimination by multilayer perceptrons
- Input variable selection in neural network models
- Neural Networks for Localized Approximation
- On estimation of surrogate models for multivariate computer experiments
- Estimates on compressed neural networks regression
- Local greedy approximation for nonlinear regression and neural network training.
- Stein's identity, Fisher information, and projection pursuit: A triangulation
- A deletion/substitution/addition algorithm for classification neural networks, with applications to biomedical data
- Approximation of fuzzy-valued functions by regular fuzzy neural networks and the accuracy analysis
- Explaining consumer choice through neural networks: the stacked generalization approach
- A General Form for Global Dynamical Data Models for Three-Dimensional Systems
- Book Review: A mathematical introduction to compressive sensing
- Parameter redundancy in neural networks: an application of Chebyshev polynomials
- Markov chain network training and conservation law approximations: Linking microscopic and macroscopic models for evolution
- Model selection in neural networks: some difficulties