Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
DOI: 10.1137/21M144431X · arXiv: 2103.00542 · OpenAlex: W4385985820 · MaRDI QID: Q6137593 · FDO: Q6137593
Authors: Yu Ling Jiao, Y. M. Lai, Xiliang Lu, Fengru Wang, Jerry Zhijian Yang, Yuanyuan Yang
Publication date: 4 September 2023
Published in: SIAM Journal on Mathematical Analysis
Full work available at URL: https://arxiv.org/abs/2103.00542
Recommendations
- Deep ReLU neural networks in high-dimensional approximation
- Deep network with approximation error being reciprocal of width to power of square root of depth
- Neural network approximation: three hidden layers are enough
- Deep ReLU networks and high-order finite element methods
- Optimal approximation rate of ReLU networks in terms of width and depth
Mathematics Subject Classification
- Multidimensional problems (41A63)
- Approximation by other special function classes (41A30)
- Algorithms for approximation of functions (65D15)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- A Stochastic Approximation Method
- Title not available
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Neural Network Learning
- Title not available
- Approximation rates for neural networks with general activation functions
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Deep network with approximation error being reciprocal of width to power of square root of depth
- Approximation spaces of deep neural networks
- Provable approximation properties for deep neural networks
- Nonparametric regression using deep neural networks with ReLU activation function
- Exponential convergence of the deep neural network approximation for analytic functions
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Error bounds for approximations with deep ReLU neural networks in \(W^{s,p}\) norms
- New error bounds for deep ReLU networks using sparse grids
- Title not available
- Deep Network Approximation for Smooth Functions
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Deep network approximation characterized by number of neurons
- Neural network approximation: three hidden layers are enough
- Exponential ReLU DNN expression of holomorphic maps in high dimension
Cited In (4)
- Exponential ReLU DNN expression of holomorphic maps in high dimension
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Analysis of the rate of convergence of two regression estimates defined by neural features which are easy to implement
- SignReLU neural network and its approximation ability