Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
Publication:6137593
DOI: 10.1137/21m144431x · arXiv: 2103.00542 · OpenAlex: W4385985820 · MaRDI QID: Q6137593
Xiliang Lu, Fengru Wang, Yan-Ming Lai, Yuan-Yuan Yang, Jerry Zhijian Yang, Yu Ling Jiao
Publication date: 4 September 2023
Published in: SIAM Journal on Mathematical Analysis
Full work available at URL: https://arxiv.org/abs/2103.00542
Mathematics Subject Classification: Multidimensional problems (41A63); Algorithms for approximation of functions (65D15); Approximation by other special function classes (41A30)
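For context on the title's terminology, a minimal sketch of the Hölder class follows, assuming the standard convention with Hölder constant 1 on the unit cube (the paper's exact normalization may differ):

% Hölder class of order alpha on [0,1]^d (standard convention; the paper's normalization may differ)
\[
  \mathcal{H}^{\alpha}\big([0,1]^d\big)
  = \big\{\, f : [0,1]^d \to \mathbb{R} \;:\;
    |f(x) - f(y)| \le \|x - y\|_2^{\alpha} \ \text{for all } x, y \in [0,1]^d \,\big\},
  \qquad \alpha \in (0,1],
\]
where, per the title, the approximating networks use hidden units whose activation is one of
\[
  \mathrm{ReLU}(x) = \max\{x, 0\}, \qquad \sin x, \qquad e^{x}.
\]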
Related Items (1)
Cites Work
- Multilayer feedforward networks are universal approximators
- Provable approximation properties for deep neural networks
- Approximation rates for neural networks with general activation functions
- Exponential convergence of the deep neural network approximation for analytic functions
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Approximation spaces of deep neural networks
- Exponential ReLU DNN expression of holomorphic maps in high dimension
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Nonparametric regression using deep neural networks with ReLU activation function
- Error bounds for approximations with deep ReLU networks
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Universal approximation bounds for superpositions of a sigmoidal function
- Neural Network Learning
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Error bounds for approximations with deep ReLU neural networks in W^{s,p} norms
- Deep Network Approximation for Smooth Functions
- Deep Network Approximation Characterized by Number of Neurons
- A Stochastic Approximation Method
- Approximation by superpositions of a sigmoidal function
- Neural network approximation: three hidden layers are enough