Neural networks with ReLU powers need less depth
From MaRDI portal
Publication:6535869
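The "ReLU powers" in the title refer to rectified power units (RePU), i.e. the ReLU activation raised to an integer power p. As a minimal illustrative sketch (the function name `repu` and the NumPy formulation are my own, not taken from the publication):

```python
import numpy as np

def repu(x, p=2):
    """Rectified power unit (RePU): max(0, x) raised to the p-th power.

    For p = 1 this reduces to the ordinary ReLU.
    """
    return np.maximum(x, 0.0) ** p
```

For example, `repu(2.0, p=2)` gives 4.0, while any negative input maps to 0.0.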
Recommendations
- Nonlinear approximation and (deep) ReLU networks
- Error bounds for approximations with deep ReLU networks
- Better approximations of high dimensional smooth functions by deep neural networks with rectified power units
- Deep vs. shallow networks: an approximation theory perspective
- SignReLU neural network and its approximation ability
Cites work
- scientific article; zbMATH DE number 477682
- Approximation by superpositions of a sigmoidal function
- Approximation spaces of deep neural networks
- Deep learning
- Error bounds for approximations with deep ReLU networks
- Multilayer feedforward networks are universal approximators
- On the successive supersymmetric rank-1 decomposition of higher-order supersymmetric tensors
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations
- Tensor analysis. Spectral theory and special tensors
Cited in (2)
This page was built for publication: Neural networks with ReLU powers need less depth