Learning sparse and smooth functions by deep sigmoid nets
Publication: Q6109261
DOI: 10.1007/s11766-023-4309-4
zbMath: 1524.68320
OpenAlex: W4381743942
MaRDI QID: Q6109261
Publication date: 27 July 2023
Published in: Applied Mathematics. Series B (English Edition)
Full work available at URL: https://doi.org/10.1007/s11766-023-4309-4
Cites Work
- Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
- Multivariate Jackson-type inequality for a new type neural network approximation
- Distributed kernel-based gradient descent algorithms
- A distribution-free theory of nonparametric regression
- Limitations of the approximation capabilities of neural networks with one hidden layer
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Theory of deep convolutional neural networks: downsampling
- Almost optimal estimates for approximation and learning by radial basis function networks
- Limitations of shallow nets approximation
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- Approximation by neural networks and learning theory
- Learning rates of least-square regularized regression
- On the mathematical foundations of learning
- Deep vs. shallow networks: An approximation theory perspective
- Learning Deep Architectures for AI
- Learning Theory
- Neural Networks for Localized Approximation
- Neural Network Learning
- Deep neural networks for rotation-invariance approximation and learning
- A Fast Learning Algorithm for Deep Belief Nets
- Approximation by superpositions of a sigmoidal function