Approximation by neural networks and learning theory
From MaRDI portal
Publication:2489152
Cites work
- scientific article; zbMATH DE number 1804108 (no title available)
- scientific article; zbMATH DE number 124386 (no title available)
- scientific article; zbMATH DE number 1332320 (no title available)
- scientific article; zbMATH DE number 1503621 (no title available)
- scientific article; zbMATH DE number 1827090 (no title available)
- scientific article; zbMATH DE number 893887 (no title available)
- A distribution-free theory of nonparametric regression
- Entropy and the combinatorial dimension
- Local Rademacher complexities
- Lower bounds for approximation by MLP neural networks
- Lower bounds for multivariate approximation by affine-invariant dictionaries
- Necessary and Sufficient Conditions for the Uniform Convergence of Means to their Expectations
- Neural Network Learning
- On the mathematical foundations of learning
- On the near optimality of the stochastic approximation of smooth functions by neural networks
- On the value of partial information for learning from examples
- Piecewise-polynomial approximations of functions of the classes \(W_p^\alpha\)
- Pseudo-dimension and entropy of manifolds formed by affine-invariant dictionary
- Regularization networks and support vector machines
- Relaxation in greedy approximation
- Sharper bounds for Gaussian and empirical processes
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Ten Lectures on Wavelets
- The Radon transform
- The entropy in learning theory. Error estimates
- The sizes of compact subsets of Hilbert space and continuity of Gaussian processes
Cited in (50)
- Asymptotics of Reinforcement Learning with Neural Networks
- Application of adjoint operators to neural learning
- Learning \(C^2\) and Hölder functions
- scientific article; zbMATH DE number 1225807 (no title available)
- scientific article; zbMATH DE number 2186223 (no title available)
- Geometric Rates of Approximation by Neural Networks
- On approximate learning by multi-layered feedforward circuits
- Deep Neural Network Approximation Theory
- Approximation spaces of deep neural networks
- Approximation theorems for a family of multivariate neural network operators in Orlicz-type spaces
- Approximation results in Orlicz spaces for sequences of Kantorovich MAX-product neural network operators
- scientific article; zbMATH DE number 5957227 (no title available)
- scientific article; zbMATH DE number 1545336 (no title available)
- Almost optimal estimates for approximation and learning by radial basis function networks
- Using Prior Information to Improve the Approximation Performances of Neural Networks
- Convergence for a family of neural network operators in Orlicz spaces
- Approximation by max-product neural network operators of Kantorovich type
- scientific article; zbMATH DE number 1405266 (no title available)
- scientific article; zbMATH DE number 1083116 (no title available)
- Voronovskaja type theorems and high-order convergence neural network operators with sigmoidal functions
- Model reduction by CPOD and Kriging: application to the shape optimization of an intake port
- THE NEWTON NEURAL NET: A NEW APPROXIMATING NETWORK
- Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
- Pointwise and uniform approximation by multivariate neural network operators of the max-product type
- Learning Theory
- Quantitative estimates involving \(K\)-functionals for neural network-type operators
- Constrained proper orthogonal decomposition based on QR-factorization for aerodynamical shape optimization
- Neural network operators: constructive interpolation of multivariate functions
- Approximation methods for supervised learning
- Interpolation and rates of convergence for a class of neural networks
- Learning and approximating piecewise smooth functions by deep sigmoid neural networks
- Some problems in the theory of ridge functions
- scientific article; zbMATH DE number 558489 (no title available)
- Approximation of classifiers by deep perceptron networks
- Approximation by sums of ridge functions with fixed directions
- Approximation rates for neural networks with general activation functions
- Approximation by neural networks with weights varying on a finite set of directions
- Local approximation on artificial neural networks
- Sample complexity bounds for the local convergence of least squares approximation
- Neural nets learning as an inverse problem
- Can neural networks extrapolate? Discussion of a theorem by Pedro Domingos
- Max-product neural network and quasi-interpolation operators activated by sigmoidal functions
- Learning sparse and smooth functions by deep sigmoid nets
- scientific article; zbMATH DE number 35425 (no title available)
- Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation
- Saturation classes for MAX-product neural network operators activated by sigmoidal functions
- Convergence results for a family of Kantorovich max-product neural network operators in a multivariate setting
- Training neural networks with noisy data as an ill-posed problem
- scientific article; zbMATH DE number 926775 (no title available)
- Learning theory and approximation by neural networks