Lower bounds for artificial neural network approximations: a proof that shallow neural networks fail to overcome the curse of dimensionality
DOI: 10.1016/J.JCO.2023.101746 · arXiv: 2103.04488 · OpenAlex: W3133651710 · MaRDI QID: Q6155895 · FDO: Q6155895
Authors: Philipp Grohs, Shokhrukh Ibragimov, Arnulf Jentzen, Sarah Koppensteiner
Publication date: 7 June 2023
Published in: Journal of Complexity
Full work available at URL: https://arxiv.org/abs/2103.04488
Recommendations
- Probabilistic lower bounds for approximation by shallow perceptron networks
- Dimension-independent bounds on the degree of approximation by neural networks
- Lower bounds for approximation by MLP neural networks
- Approximation and estimation bounds for artificial neural networks
- On the approximation by neural networks with bounded number of neurons in hidden layers
- Rates of approximation by ReLU shallow neural networks
- Deep vs. shallow networks: an approximation theory perspective
- An Integral Upper Bound for Neural Network Approximation
Keywords: artificial neural networks; curse of dimensionality; lower bounds; artificial neural network approximations; overcoming the curse of dimensionality
MSC classes: Artificial intelligence (68Txx); Numerical methods for partial differential equations, initial value and time-dependent initial-boundary value problems (65Mxx); Approximations and expansions (41Axx)
Cites Work
- A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training
- Universal approximation bounds for superpositions of a sigmoidal function
- Comparison of worst case errors in linear and neural network approximation
- Wahrscheinlichkeitstheorie [Probability theory]
- A Remark on Stirling's Formula
- Multilayer feedforward networks are universal approximators
- Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks
- Approximation by superpositions of a sigmoidal function
- Tractability of multivariate problems. Volume I: Linear information
- Tractability of multivariate problems. Volume II: Standard information for functionals.
- Lower bounds for approximation by MLP neural networks
- Rates of convex approximation in non-Hilbert spaces
- Complexity of Gaussian-radial-basis networks approximating smooth functions
- Approximation and learning of convex superpositions
- Geometric Upper Bounds on Rates of Variable-Basis Approximation
- Approximation and estimation bounds for artificial neural networks
- On the approximation by single hidden layer feedforward neural networks with fixed weights
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
- Optimal approximation with sparsely connected deep neural networks
- Proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
- Analysis of the generalization error: empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
- Minimization of Error Functionals over Perceptron Networks
- Deep vs. shallow networks: an approximation theory perspective
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With $\ell^1$ and $\ell^0$ Controls
- DNN expression rate analysis of high-dimensional PDEs: application to option pricing
- An overview on deep learning-based approximation methods for partial differential equations
- Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning
- Deep neural network approximations for solutions of PDEs based on Monte Carlo algorithms
- Full error analysis for the training of deep neural networks
- Uniform error estimates for artificial neural network approximations for heat equations
- Space-time error estimates for deep neural network approximations for differential equations
Cited In (3)