Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
DOI: 10.1137/21M144431X · arXiv: 2103.00542 · OpenAlex: W4385985820 · MaRDI QID: Q6137593 · FDO: Q6137593
Xiliang Lu, Fengru Wang, Y. M. Lai, Yuanyuan Yang, Jerry Zhijian Yang, Yu Ling Jiao
Publication date: 4 September 2023
Published in: SIAM Journal on Mathematical Analysis
Full work available at URL: https://arxiv.org/abs/2103.00542
Mathematics Subject Classification: Multidimensional problems (41A63); Approximation by other special function classes (41A30); Algorithms for approximation of functions (65D15)
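The title refers to feedforward networks whose hidden units use the three activations ReLU, sine, and exponential to approximate Hölder-continuous functions. As a rough illustration only (not the construction analyzed in the paper), the Python sketch below evaluates a small network mixing these three activations; the widths, depth, and random weights are arbitrary placeholders chosen solely to make the example runnable.

```python
import numpy as np

def relu(x):
    # ReLU activation: max(x, 0) applied elementwise.
    return np.maximum(x, 0.0)

def forward(x, layers):
    """Apply a stack of (weight, bias, activation) layers to input x."""
    h = x
    for W, b, act in layers:
        h = act(W @ h + b)
    return h

rng = np.random.default_rng(0)
d, width = 4, 8  # input dimension and hidden width (arbitrary placeholders)

# Three layers, one per activation named in the title: ReLU, sine, exponential.
# The small scale on the last layer keeps exp() from overflowing.
layers = [
    (rng.standard_normal((width, d)), rng.standard_normal(width), relu),
    (rng.standard_normal((width, width)), rng.standard_normal(width), np.sin),
    (0.1 * rng.standard_normal((1, width)), np.zeros(1), np.exp),
]

x = rng.standard_normal(d)
print(forward(x, layers))  # scalar network output
```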
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- A Stochastic Approximation Method
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Neural Network Learning
- Approximation rates for neural networks with general activation functions
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
- Approximation spaces of deep neural networks
- Provable approximation properties for deep neural networks
- Nonparametric regression using deep neural networks with ReLU activation function
- Exponential convergence of the deep neural network approximation for analytic functions
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Error bounds for approximations with deep ReLU neural networks in Ws,p norms
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Deep Network Approximation for Smooth Functions
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Deep Network Approximation Characterized by Number of Neurons
- Neural network approximation: three hidden layers are enough
- Exponential ReLU DNN expression of holomorphic maps in high dimension
Cited in 3 documents