Deep Network Approximation Characterized by Number of Neurons

Publication: 5162359

DOI: 10.4208/cicp.OA-2020-0149
OpenAlex: W3101996726
MaRDI QID: Q5162359

Haizhao Yang, Zuowei Shen, Shijun Zhang

Publication date: 2 November 2021

Published in: Communications in Computational Physics

Full work available at URL: https://arxiv.org/abs/1906.05497




Related Items (30)

Discovery of subdiffusion problem with noisy data via deep learning
An Augmented Lagrangian Deep Learning Method for Variational Problems with Essential Boundary Conditions
A Deep Learning Method for Elliptic Hemivariational Inequalities
Full error analysis for the training of deep neural networks
The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
Approximation bounds for norm constrained neural networks with applications to regression and GANs
Simultaneous neural network approximation for smooth functions
Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
Neural network approximation: three hidden layers are enough
On the capacity of deep generative networks for approximating distributions
Convergence of deep convolutional neural networks
A Deep Generative Approach to Conditional Sampling
The Kolmogorov-Arnold representation theorem revisited
Approximation Analysis of Convolutional Neural Networks
Active learning based sampling for high-dimensional nonlinear partial differential equations
SignReLU neural network and its approximation ability
Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs
Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors
Deep learning via dynamical systems: an approximation perspective
Approximation in shift-invariant spaces with deep ReLU neural networks
Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations
Deep Network Approximation for Smooth Functions
Optimal approximation rate of ReLU networks in terms of width and depth
Stochastic Markov gradient descent and training low-bit neural networks
The Gap between Theory and Practice in Function Approximation with Deep Neural Networks
Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
Spline representation and redundancies of one-dimensional ReLU neural network models
Nonlinear approximation and (deep) ReLU networks
A New Function Space from Barron Class and Application to Neural Network Approximation




This page was built for publication: Deep Network Approximation Characterized by Number of Neurons