Neural networks with ReLU powers need less depth
Publication: 6535869
DOI: 10.1016/J.NEUNET.2023.12.027
MaRDI QID: Q6535869
FDO: Q6535869
Authors: Kurt Izak M. Cabanilla, Rhudaina Z. Mohammad, Jose Ernie C. Lope
Publication date: 5 March 2024
Published in: Neural Networks
Recommendations
- Nonlinear approximation and (deep) ReLU networks
- Error bounds for approximations with deep ReLU networks
- Better approximations of high dimensional smooth functions by deep neural networks with rectified power units
- Deep vs. shallow networks: an approximation theory perspective
- SignReLU neural network and its approximation ability
Cites Work
- Deep learning
- Title not available
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- On the successive supersymmetric rank-1 decomposition of higher-order supersymmetric tensors
- Tensor analysis. Spectral theory and special tensors
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- Approximation spaces of deep neural networks
- Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations
Cited In (2)