Deep neural networks for rotation-invariance approximation and learning
DOI: 10.1142/S0219530519400074
zbMath: 1423.68378
arXiv: 1904.01814
OpenAlex: W2965767282
MaRDI QID: Q5236745
Ding-Xuan Zhou, Shao-Bo Lin, Charles K. Chui
Publication date: 10 October 2019
Published in: Analysis and Applications
Full work available at URL: https://arxiv.org/abs/1904.01814
Mathematics Subject Classification:
Learning and adaptive systems in artificial intelligence (68T05)
Neural networks for/in biological studies, artificial life and related topics (92B20)
Related Items (15)
- Modified proximal symmetric ADMMs for multi-block separable convex optimization with linear constraints
- Approximations with deep neural networks in Sobolev time-space
- A deep network construction that adapts to intrinsic dimensionality beyond the domain
- Theory of deep convolutional neural networks. III: Approximating radial functions
- Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation
- Learning sparse and smooth functions by deep sigmoid nets
- SignReLU neural network and its approximation ability
- Deep learning theory of distribution regression with CNNs
- Deep ReLU networks and high-order finite element methods
- Error bounds for approximations with deep ReLU neural networks in W^{s,p} norms
- Unnamed Item
- Learning under \((1 + \epsilon)\)-moment conditions
- Balanced joint maximum mean discrepancy for deep transfer learning
- Approximation of functions from Korobov spaces by deep convolutional neural networks
- Spline representation and redundancies of one-dimensional ReLU neural network models
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unregularized online learning algorithms with general loss functions
- Approximation by polynomials and ridge functions of classes of \(s\)-monotone radial functions
- Learning and approximation by Gaussians on Riemannian manifolds
- On the degree of approximation by manifolds of finite pseudo-dimension
- Lower bounds for approximation by MLP neural networks
- Why does deep and cheap learning work so well?
- Provable approximation properties for deep neural networks
- Distributed kernel-based gradient descent algorithms
- A distribution-free theory of nonparametric regression
- The covering number in learning theory
- When is approximation by Gaussian networks necessarily a linear process?
- Approximation properties of a multilayered feedforward artificial neural network
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- Approximation of Sobolev classes by polynomials and ridge functions
- Approximation with polynomial kernels and SVM classifiers
- Learning rates of least-square regularized regression
- Deep vs. shallow networks: An approximation theory perspective
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- Neural Networks for Localized Approximation
- Radial Basis Functions
- Deep distributed convolutional neural networks: Universality
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Nonparametric Regression Based on Hierarchical Interaction Models
- A Fast Learning Algorithm for Deep Belief Nets