Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
Publication: 2055036
DOI: 10.1016/j.neunet.2019.12.013
OpenAlex: W3031010353
Wikidata: Q96028494 (Scholia: Q96028494)
MaRDI QID: Q2055036
Hadrien Montanelli, Haizhao Yang
Publication date: 3 December 2021
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1906.11945
Keywords: approximation theory; curse of dimensionality; deep ReLU networks; Kolmogorov-Arnold superposition theorem
MSC classification: Artificial neural networks and deep learning (68T07); Multidimensional problems (41A63); Approximation by other special function classes (41A30); Representation and superposition of functions (26B40)
Related Items (23)
- Structure probing neural network deflation
- Stationary Density Estimation of Itô Diffusions Using Deep Learning
- Machine learning for prediction with missing dynamics
- SelectNet: self-paced learning for high-dimensional partial differential equations
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
- Simultaneous neural network approximation for smooth functions
- Neural network approximation: three hidden layers are enough
- On the approximation of functions by tanh neural networks
- A note on computing with Kolmogorov superpositions without iterations
- Convergence of deep convolutional neural networks
- The Kolmogorov-Arnold representation theorem revisited
- Friedrichs Learning: Weak Solutions of Partial Differential Equations via Deep Learning
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- Deep nonparametric estimation of intrinsic data structures by chart autoencoders: generalization error and robustness
- Approximation of compositional functions with ReLU neural networks
- Growing axons: greedy learning of neural networks with application to function approximation
- A three layer neural network can represent any multivariate function
- Convergence rate of DeepONets for learning operators arising from advection-diffusion equations
- ExSpliNet: An interpretable and expressive spline-based neural network
- Deep Network Approximation for Smooth Functions
- Deep Network Approximation Characterized by Number of Neurons
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
Cites Work
- On a constructive proof of Kolmogorov's superposition theorem
- Multilayer feedforward networks are universal approximators
- Provable approximation properties for deep neural networks
- Optimal nonlinear approximation
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- An improvement in the superposition theorem of Kolmogorov
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Metric Entropy, Widths, and Superpositions of Functions
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Numerical Methods for Special Functions
- On the Structure of Continuous Functions of Several Variables
- Approximation by superpositions of a sigmoidal function