Approximation rates for neural networks with general activation functions
From MaRDI portal
Publication: 1982446
DOI: 10.1016/j.neunet.2020.05.019
zbMath: 1480.41007
arXiv: 1904.02311
OpenAlex: W3027734040
Wikidata: Q96018643 (Scholia: Q96018643)
MaRDI QID: Q1982446
Jonathan W. Siegel, Jin-Chao Xu
Publication date: 8 September 2021
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1904.02311
Classification (MSC2020):
Artificial neural networks and deep learning (68T07)
Rate of convergence, degree of approximation (41A25)
Approximation by other special function classes (41A30)
Related Items (39)
Stationary Density Estimation of Itô Diffusions Using Deep Learning ⋮ Deep Adaptive Basis Galerkin Method for High-Dimensional Evolution Equations With Oscillatory Solutions ⋮ Least-squares ReLU neural network (LSNN) method for linear advection-reaction equation ⋮ A mesh-free method using piecewise deep neural network for elliptic interface problems ⋮ Approximation properties of deep ReLU CNNs ⋮ ReLU deep neural networks from the hierarchical basis perspective ⋮ Randomized Newton's method for solving differential equations based on the neural network discretization ⋮ Convergence Rate Analysis for Deep Ritz Method ⋮ HomPINNs: Homotopy physics-informed neural networks for learning multiple solutions of nonlinear elliptic differential equations ⋮ Deep Ritz Method for the Spectral Fractional Laplacian Equation Using the Caffarelli--Silvestre Extension ⋮ Wasserstein generative adversarial uncertainty quantification in physics-informed neural networks ⋮ Simultaneous neural network approximation for smooth functions ⋮ Framework for segmented threshold \(\ell_0\) gradient approximation based network for sparse signal recovery ⋮ Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems ⋮ On the approximation of functions by tanh neural networks ⋮ Divide-and-conquer DNN approach for the inverse point source problem using a few single frequency measurements ⋮ A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems ⋮ Galerkin neural network approximation of singularly-perturbed elliptic systems ⋮ Characterization of the variation spaces corresponding to shallow neural networks ⋮ Universal regular conditional distributions via probabilistic transformers ⋮ Friedrichs Learning: Weak Solutions of Partial Differential Equations via Deep Learning ⋮ Active learning based sampling for high-dimensional nonlinear partial differential equations ⋮ A mathematical perspective of machine learning ⋮ Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class ⋮ Neural network approximation and estimation of classifiers with classification boundary in a Barron class ⋮ A priori error estimate of deep mixed residual method for elliptic PDEs ⋮ Convergence Analysis of the Deep Galerkin Method for Weak Solutions ⋮ Designing universal causal deep learning models: The geometric (Hyper)transformer ⋮ Neural network stochastic differential equation models with applications to financial data forecasting ⋮ Greedy training algorithms for neural networks and applications to PDEs ⋮ Finite Neuron Method and Convergence Analysis ⋮ Convergence analysis of neural networks for solving a free boundary problem ⋮ Optimal approximation rate of ReLU networks in terms of width and depth ⋮ Construct Deep Neural Networks based on Direct Sampling Methods for Solving Electrical Impedance Tomography ⋮ Two neural-network-based methods for solving elliptic obstacle problems ⋮ High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions ⋮ A New Function Space from Barron Class and Application to Neural Network Approximation
Cites Work
- Degree of approximation by neural and translation networks with a single hidden layer
- Random approximants and neural networks
- Error bounds for approximations with deep ReLU networks
- A Sobolev-type upper bound for rates of approximation by linear combinations of Heaviside plane waves
- Approximation by Ridge Functions and Neural Networks
- Universal approximation bounds for superpositions of a sigmoidal function
- Dimension-independent bounds on the degree of approximation by neural networks
- Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives
- Deep distributed convolutional neural networks: Universality