Approximation by neural networks and learning theory
DOI: 10.1016/j.jco.2005.09.001
zbMath: 1156.68541
OpenAlex: W2044675413
MaRDI QID: Q2489152
Publication date: 16 May 2006
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2005.09.001
Learning and adaptive systems in artificial intelligence (68T05)
Neural networks for/in biological studies, artificial life and related topics (92B20)
Stochastic approximation (62L20)
Neural nets and related approaches to inference from stochastic processes (62M45)
Related Items (20)
- Approximation theorems for a family of multivariate neural network operators in Orlicz-type spaces
- Approximation by max-product neural network operators of Kantorovich type
- Interpolation and rates of convergence for a class of neural networks
- Max-product neural network and quasi-interpolation operators activated by sigmoidal functions
- Neural network operators: constructive interpolation of multivariate functions
- Model reduction by CPOD and Kriging: application to the shape optimization of an intake port
- Voronovskaja type theorems and high-order convergence neural network operators with sigmoidal functions
- Saturation classes for MAX-product neural network operators activated by sigmoidal functions
- Pointwise and uniform approximation by multivariate neural network operators of the max-product type
- Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
- Learning sparse and smooth functions by deep sigmoid nets
- Convergence results for a family of Kantorovich max-product neural network operators in a multivariate setting
- Convergence for a family of neural network operators in Orlicz spaces
- Constrained proper orthogonal decomposition based on QR-factorization for aerodynamical shape optimization
- Approximation by sums of ridge functions with fixed directions
- Approximation by neural networks with weights varying on a finite set of directions
- Approximation results in Orlicz spaces for sequences of Kantorovich MAX-product neural network operators
- Almost optimal estimates for approximation and learning by radial basis function networks
- Quantitative estimates involving K-functionals for neural network-type operators
- Some problems in the theory of ridge functions
Cites Work
- The entropy in learning theory. Error estimates
- Relaxation in greedy approximation
- The Radon transform
- Lower bounds for approximation by MLP neural networks
- Sharper bounds for Gaussian and empirical processes
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- On the value of partial information for learning from examples
- Entropy and the combinatorial dimension
- A distribution-free theory of nonparametric regression
- Regularization networks and support vector machines
- On the near optimality of the stochastic approximation of smooth functions by neural networks
- Pseudo-dimension and entropy of manifolds formed by affine-invariant dictionary
- The sizes of compact subsets of Hilbert space and continuity of Gaussian processes
- Local Rademacher complexities
- On the mathematical foundations of learning
- Necessary and Sufficient Conditions for the Uniform Convergence of Means to their Expectations
- Ten Lectures on Wavelets
- Lower bounds for multivariate approximation by affine-invariant dictionaries
- Neural Network Learning
- Piecewise-polynomial approximations of functions of the classes \(W_p^\alpha\)