Universal approximation bounds for superpositions of a sigmoidal function
Publication: 4277151
DOI: 10.1109/18.256500 · zbMATH Open: 0818.68126 · OpenAlex: W2166116275 · MaRDI QID: Q4277151 · FDO: Q4277151
Author: Andrew R. Barron
Publication date: 7 February 1994
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://semanticscholar.org/paper/04113e8974341f97258800126d05fd8df2751b7e
Recommendations
- Approximation by superpositions of a sigmoidal function
- Approximation and estimation bounds for artificial neural networks
- Estimation of approximating rate for neural network in \(L^p_w\) spaces
- Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
- Approximation Capability of Layered Neural Networks with Sigmoid Units on Two Layers
Cited In (showing first 100 items)
- Deep ReLU network expression rates for option prices in high-dimensional, exponential Lévy models
- Neural network-based variational methods for solving quadratic porous medium equations in high dimensions
- Correlations of random classifiers on large data sets
- Standard representation and unified stability analysis for dynamic artificial neural network models
- A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
- Generalization bounds for sparse random feature expansions
- Understanding neural networks with reproducing kernel Banach spaces
- DNN expression rate analysis of high-dimensional PDEs: application to option pricing
- Do ideas have shape? Idea registration as the continuous limit of artificial neural networks
- Representation formulas and pointwise properties for Barron functions
- Depth separations in neural networks: what is actually being separated?
- The Barron space and the flow-induced function spaces for neural network models
- High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
- Nonconvex regularization for sparse neural networks
- Equivalence of approximation by convolutional neural networks and fully-connected networks
- The Random Feature Model for Input-Output Maps between Banach Spaces
- Learning nonlinear state-space models using autoencoders
- Learning the mapping \(\mathbf{x}\mapsto \sum\limits_{i=1}^d x_i^2\): the cost of finding the needle in a haystack
- Deep Network Approximation for Smooth Functions
- Effect of depth and width on local minima in deep learning
- Structure probing neural network deflation
- Generalization Error Analysis of Neural Networks with Gradient Based Regularization
- Estimation of a regression function on a manifold by fully connected deep neural networks
- SelectNet: self-paced learning for high-dimensional partial differential equations
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Error estimates for deep learning methods in fluid dynamics
- Neural network interpolation operators activated by smooth ramp functions
- A deep Fourier residual method for solving PDEs using neural networks
- Fractional type multivariate neural network operators
- An Augmented Lagrangian Deep Learning Method for Variational Problems with Essential Boundary Conditions
- Cornell potential: a neural network approach
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Approximation by finite mixtures of continuous density functions that vanish at infinity
- Approximation Analysis of Convolutional Neural Networks
- Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
- Machine learning based data retrieval for inverse scattering problems with incomplete data
- Neural network approximation: three hidden layers are enough
- Approximating functions with multi-features by deep convolutional neural networks
- Approximation of nonlinear functionals using deep ReLU networks
- Approximation of smooth functionals using deep ReLU networks
- Convergence of Physics-Informed Neural Networks Applied to Linear Second-Order Elliptic Interface Problems
- Theory of deep convolutional neural networks: downsampling
- On the rate of convergence of fully connected deep neural network regression estimates
- Transport analysis of infinitely deep neural network
- Finite neuron method and convergence analysis
- Theory of deep convolutional neural networks. II: Spherical analysis
- High-dimensional distribution generation through deep neural networks
- A theoretical analysis of deep neural networks and parametric PDEs
- A neural network based shock detection and localization approach for discontinuous Galerkin methods
- Deep network approximation characterized by number of neurons
- Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
- Optimal approximation rate of ReLU networks in terms of width and depth
- Approximation properties of deep ReLU CNNs
- Banach space representer theorems for neural networks and ridge splines
- Applications of topological derivatives and neural networks for inverse problems
- A deep network construction that adapts to intrinsic dimensionality beyond the domain
- Supervised learning from noisy observations: combining machine-learning techniques with data assimilation
- On the approximation of rough functions with deep neural networks
- Learning on dynamic statistical manifolds
- Universal discrete-time reservoir computers with stochastic inputs and linear readouts using non-homogeneous state-affine systems
- Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels
- Characterization of the variation spaces corresponding to shallow neural networks
- Machine learning for prediction with missing dynamics
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- Int-Deep: a deep learning initialized iterative method for nonlinear problems
- Excitable media store and transfer complicated information via topological defect motion
- A priori and a posteriori error estimates for the deep Ritz method applied to the Laplace and Stokes problem
- Greedy approximation in convex optimization
- Greedy algorithms for prediction
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Efficient sampling in approximate dynamic programming algorithms
- Information Geometry of U-Boost and Bregman Divergence
- MgNet: a unified framework of multigrid and convolutional neural network
- Exponential screening and optimal rates of sparse estimation
- On the curse of dimensionality in the Ritz method
- Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
- Approximation order to a function in \(\overline C({\mathbb R})\) by superposition of a sigmoidal function
- A comparison between fixed-basis and variable-basis schemes for function approximation and functional optimization
- Computation and learning in high dimensions. Abstracts from the workshop held August 1--7, 2021 (hybrid meeting)
- Mean-field Langevin dynamics and energy landscape of neural networks
- Functional aggregation for nonparametric regression.
- An exponential inequality under weak dependence
- Another look at statistical learning theory and regularization
- Multivariate extensions of isotonic regression and total variation denoising via entire monotonicity and Hardy-Krause variation
- Approximation by network operators with logistic activation functions
- High-dimensional change-point estimation: combining filtering with convex optimization
- Moving-horizon state estimation for nonlinear discrete-time systems: new stability results and approximation schemes
- Low-rank kernel approximation of Lyapunov functions using neural networks
- Approximation by superpositions of a sigmoidal function
- Approximation results for neural network operators activated by sigmoidal functions
- Estimation of the binary response model using a mixture of distributions estimator (MOD)
- Approximating networks and extended Ritz method for the solution of functional optimization problems
- Approximation by max-product neural network operators of Kantorovich type
- Trigonometric RBF neural robust controller design for a class of nonlinear system with linear input unmodeled dynamics
- On Kolmogorov's representation of functions of several variables by functions of one variable
- Adaptive importance sampling for control and inference
- Linear and nonlinear approximation of spherical radial basis function networks