A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
From MaRDI portal
Publication:5223573
Recommendations
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Error bounds for approximations with deep ReLU networks
- Nonlinear approximation and (deep) ReLU networks
- New error bounds for deep ReLU networks using sparse grids
- Provable approximation properties for deep neural networks
Cited in (24)
- Linearized two-layers neural networks in high dimension
- Breaking the curse of dimensionality with convex neural networks
- High-dimensional distribution generation through deep neural networks
- Tractability of approximation by general shallow networks
- PowerNet: efficient representations of polynomials and smooth functions by deep neural networks with rectified power units
- Any target function exists in a neighborhood of any sufficiently wide random network: a geometrical perspective
- A multivariate Riesz basis of ReLU neural networks
- A deep network construction that adapts to intrinsic dimensionality beyond the domain
- Neural network approximation and estimation of classifiers with classification boundary in a Barron class
- Deep ReLU neural networks in high-dimensional approximation
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- New error bounds for deep ReLU networks using sparse grids
- Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks
- A note on the applications of one primary function in deep neural networks
- Deep Network Approximation for Smooth Functions
- On mathematical modeling in image reconstruction and beyond
- Deep network with approximation error being reciprocal of width to power of square root of depth
- Provable approximation properties for deep neural networks
- Neural network approximation: three hidden layers are enough
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Nonlinear approximation and (deep) ReLU networks
- Optimal approximation rate of ReLU networks in terms of width and depth
- High-dimensional approximate r-nets
- The construction and approximation of ReLU neural network operators
This page was built for publication: A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces