A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
Publication: 5223573
DOI: 10.1002/mma.5575
zbMATH Open: 1416.41018
OpenAlex: W2924785619
Wikidata: Q128195148
Scholia: Q128195148
MaRDI QID: Q5223573
FDO: Q5223573
Authors: Liang Chen, Congwei Wu
Publication date: 18 July 2019
Published in: Mathematical Methods in the Applied Sciences
Full work available at URL: https://doi.org/10.1002/mma.5575
Recommendations
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Error bounds for approximations with deep ReLU networks
- Nonlinear approximation and (deep) ReLU networks
- New error bounds for deep ReLU networks using sparse grids
- Provable approximation properties for deep neural networks
Cited In (24)
- A multivariate Riesz basis of ReLU neural networks
- A note on the applications of one primary function in deep neural networks
- Deep Network Approximation for Smooth Functions
- Provable approximation properties for deep neural networks
- Deep ReLU neural networks in high-dimensional approximation
- Deep network with approximation error being reciprocal of width to power of square root of depth
- Breaking the curse of dimensionality with convex neural networks
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Neural network approximation and estimation of classifiers with classification boundary in a Barron class
- Neural network approximation: three hidden layers are enough
- High-dimensional approximate r-nets
- Nonlinear approximation and (deep) ReLU networks
- Tractability of approximation by general shallow networks
- On mathematical modeling in image reconstruction and beyond
- New error bounds for deep ReLU networks using sparse grids
- High-dimensional distribution generation through deep neural networks
- PowerNet: efficient representations of polynomials and smooth functions by deep neural networks with rectified power units
- Any target function exists in a neighborhood of any sufficiently wide random network: a geometrical perspective
- Optimal approximation rate of ReLU networks in terms of width and depth
- A deep network construction that adapts to intrinsic dimensionality beyond the domain
- Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks
- Linearized two-layers neural networks in high dimension
- The construction and approximation of ReLU neural network operators