Approximation by neural networks and learning theory
DOI: 10.1016/J.JCO.2005.09.001 · zbMATH Open: 1156.68541 · OpenAlex: W2044675413 · MaRDI QID: Q2489152 · FDO: Q2489152
Authors: Vitaly Maiorov
Publication date: 16 May 2006
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2005.09.001
Mathematics Subject Classification:
- 68T05 Learning and adaptive systems in artificial intelligence
- 62L20 Stochastic approximation
- 62M45 Neural nets and related approaches to inference from stochastic processes
- 92B20 Neural networks for/in biological studies, artificial life and related topics
Cites Work
- Regularization networks and support vector machines
- Ten Lectures on Wavelets
- On the mathematical foundations of learning
- Sharper bounds for Gaussian and empirical processes
- A distribution-free theory of nonparametric regression
- Local Rademacher complexities
- The Radon transform
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Neural Network Learning
- Piecewise-polynomial approximations of functions of the classes \(W_{p}^{\alpha}\)
- Lower bounds for approximation by MLP neural networks
- Pseudo-dimension and entropy of manifolds formed by affine-invariant dictionary
- Lower bounds for multivariate approximation by affine-invariant dictionaries
- The sizes of compact subsets of Hilbert space and continuity of Gaussian processes
- The entropy in learning theory. Error estimates
- Entropy and the combinatorial dimension
- Necessary and Sufficient Conditions for the Uniform Convergence of Means to their Expectations
- Relaxation in greedy approximation
- On the value of partial information for learning from examples
- On the near optimality of the stochastic approximation of smooth functions by neural networks
Cited In (50)
- Learning \(C^2\) and Hölder functions
- Asymptotics of Reinforcement Learning with Neural Networks
- Application of adjoint operators to neural learning
- Geometric Rates of Approximation by Neural Networks
- Deep Neural Network Approximation Theory
- On approximate learning by multi-layered feedforward circuits
- Approximation spaces of deep neural networks
- Approximation theorems for a family of multivariate neural network operators in Orlicz-type spaces
- Approximation results in Orlicz spaces for sequences of Kantorovich MAX-product neural network operators
- Using Prior Information to Improve the Approximation Performances of Neural Networks
- Almost optimal estimates for approximation and learning by radial basis function networks
- Convergence for a family of neural network operators in Orlicz spaces
- Approximation by max-product neural network operators of Kantorovich type
- Voronovskaja type theorems and high-order convergence neural network operators with sigmoidal functions
- Model reduction by CPOD and Kriging: application to the shape optimization of an intake port
- The Newton neural net: a new approximating network
- Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
- Pointwise and uniform approximation by multivariate neural network operators of the max-product type
- Learning Theory
- Quantitative estimates involving \(K\)-functionals for neural network-type operators
- Constrained proper orthogonal decomposition based on QR-factorization for aerodynamical shape optimization
- Neural network operators: constructive interpolation of multivariate functions
- Approximation methods for supervised learning
- Learning and approximating piecewise smooth functions by deep sigmoid neural networks
- Interpolation and rates of convergence for a class of neural networks
- Some problems in the theory of ridge functions
- Approximation of classifiers by deep perceptron networks
- Approximation by sums of ridge functions with fixed directions
- Approximation rates for neural networks with general activation functions
- Sample complexity bounds for the local convergence of least squares approximation
- Approximation by neural networks with weights varying on a finite set of directions
- Local approximation on artificial neural networks
- Neural nets learning as an inverse problem
- Can neural networks extrapolate? Discussion of a theorem by Pedro Domingos
- Max-product neural network and quasi-interpolation operators activated by sigmoidal functions
- Learning sparse and smooth functions by deep sigmoid nets
- Convergence results for a family of Kantorovich max-product neural network operators in a multivariate setting
- Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation
- Training neural networks with noisy data as an ill-posed problem
- Saturation classes for MAX-product neural network operators activated by sigmoidal functions
- Learning theory and approximation by neural networks