Exponential convergence of the deep neural network approximation for analytic functions
From MaRDI portal
Publication: 1989902
DOI: 10.1007/s11425-018-9387-x
zbMath: 1475.65007
arXiv: 1807.00297
OpenAlex: W2963172624
Wikidata: Q115602254 (Scholia: Q115602254)
MaRDI QID: Q1989902
Publication date: 29 October 2018
Published in: Science China. Mathematics
Full work available at URL: https://arxiv.org/abs/1807.00297
Mathematics Subject Classification: Artificial neural networks and deep learning (68T07); Algorithms for approximation of functions (65D15)
Related Items (36)
- Structure probing neural network deflation
- Neural network approximation
- Analytic continuation of noisy data using Adams Bashforth residual neural network
- SelectNet: self-paced learning for high-dimensional partial differential equations
- ReLU deep neural networks from the hierarchical basis perspective
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Full error analysis for the training of deep neural networks
- Sparse approximation of triangular transports. I: The finite-dimensional case
- A note on the applications of one primary function in deep neural networks
- Nonlinear approximation via compositions
- Simultaneous neural network approximation for smooth functions
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- On the approximation of functions by tanh neural networks
- Deep ReLU neural network approximation in Bochner spaces and applications to parametric PDEs
- Convergence of deep convolutional neural networks
- Deep ReLU neural networks in high-dimensional approximation
- Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs
- Improved training of physics-informed neural networks for parabolic differential equations with sharply perturbed initial conditions
- On mathematical modeling in image reconstruction and beyond
- Computation and learning in high dimensions. Abstracts from the workshop held August 1--7, 2021 (hybrid meeting)
- Deep ReLU networks and high-order finite element methods
- Rectified deep neural networks overcome the curse of dimensionality for nonsmooth value functions in zero-sum games of nonlinear stiff systems
- PDE-Net 2.0: learning PDEs from data with a numeric-symbolic hybrid deep network
- Deep Network Approximation for Smooth Functions
- Better Approximations of High Dimensional Smooth Functions by Deep Neural Networks with Rectified Power Units
- Selection dynamics for deep neural networks
- Optimal approximation rate of ReLU networks in terms of width and depth
- Constructive deep ReLU neural network approximation
- MgNet: a unified framework of multigrid and convolutional neural network
- The Gap between Theory and Practice in Function Approximation with Deep Neural Networks
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
- Non-homogeneous Poisson process intensity modeling and estimation using measure transport
- Nonlinear approximation and (deep) ReLU networks
- Exponential ReLU DNN expression of holomorphic maps in high dimension
This page was built for publication: Exponential convergence of the deep neural network approximation for analytic functions