Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
From MaRDI portal
Publication: 6062170
DOI: 10.1016/j.jco.2023.101783
OpenAlex: W4385311400
MaRDI QID: Q6062170
Publication date: 30 November 2023
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2023.101783
Cites Work
- Optimal global rates of convergence for noiseless regression estimation problems with adaptively chosen design
- On the approximation by neural networks with bounded number of neurons in hidden layers
- On best approximation by ridge functions
- Lower bounds for approximation by MLP neural networks
- Multivariate Jackson-type inequality for a new type neural network approximation
- Distributed kernel-based gradient descent algorithms
- A distribution-free theory of nonparametric regression
- Limitations of the approximation capabilities of neural networks with one hidden layer
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Theory of deep convolutional neural networks: downsampling
- Almost optimal estimates for approximation and learning by radial basis function networks
- Limitations of shallow nets approximation
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- Approximation by neural networks and learning theory
- Deep vs. shallow networks: An approximation theory perspective
- Learning Theory
- Neural Networks for Localized Approximation
- Deep distributed convolutional neural networks: Universality
- Deep neural networks for rotation-invariance approximation and learning
- Ridge Functions
- A Fast Learning Algorithm for Deep Belief Nets