A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
Publication:5223573
DOI: 10.1002/MMA.5575
zbMATH Open: 1416.41018
OpenAlex: W2924785619
Wikidata: Q128195148
Scholia: Q128195148
MaRDI QID: Q5223573
FDO: Q5223573
Publication date: 18 July 2019
Published in: Mathematical Methods in the Applied Sciences
Full work available at URL: https://doi.org/10.1002/mma.5575
Recommendations
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Error bounds for approximations with deep ReLU networks
- Nonlinear approximation and (deep) ReLU networks
- New error bounds for deep ReLU networks using sparse grids
- Provable approximation properties for deep neural networks
Cited In (10)
- A note on the applications of one primary function in deep neural networks
- Deep Network Approximation for Smooth Functions
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Neural network approximation: three hidden layers are enough
- High-dimensional approximate r-nets
- On mathematical modeling in image reconstruction and beyond
- Optimal approximation rate of ReLU networks in terms of width and depth
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
- Linearized two-layers neural networks in high dimension
- The construction and approximation of ReLU neural network operators