Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
Publication: 5004339
DOI: 10.1162/neco_a_01364
OpenAlex: W3048707547
MaRDI QID: Q5004339
Shijun Zhang, Haizhao Yang, Zuowei Shen
Publication date: 30 July 2021
Published in: Neural Computation
Full work available at URL: https://arxiv.org/abs/2006.12231
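For orientation, the rate named in the title can be written compactly. In the notation below (N for network width, L for depth, and φ for the constructed network; these symbols are ours, not part of this record), the headline bound of the paper has the form

\[ \|f - \phi\|_{L^\infty([0,1]^d)} = \mathcal{O}\!\big(N^{-\sqrt{L}}\big), \]

that is, the approximation error is the reciprocal of the width raised to the power of the square root of the depth. As we read the linked arXiv preprint, this is achieved by networks using floor and ReLU activations for Hölder continuous targets, with the Hölder exponent entering the rate as N^{-α√L}; see the preprint for the precise statement and constants.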
Related Items (20)
- Stationary Density Estimation of Itô Diffusions Using Deep Learning
- Deep Adaptive Basis Galerkin Method for High-Dimensional Evolution Equations With Oscillatory Solutions
- SelectNet: self-paced learning for high-dimensional partial differential equations
- High Order Deep Neural Network for Solving High Frequency Partial Differential Equations
- Deep Ritz Method for the Spectral Fractional Laplacian Equation Using the Caffarelli–Silvestre Extension
- The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
- DeepParticle: learning invariant measure by a deep neural network minimizing Wasserstein distance on data generated from an interacting particle method
- Simultaneous neural network approximation for smooth functions
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- Neural network approximation: three hidden layers are enough
- Convergence of deep convolutional neural networks
- On the recovery of internal source for an elliptic system by neural network approximation
- Friedrichs Learning: Weak Solutions of Partial Differential Equations via Deep Learning
- Active learning based sampling for high-dimensional nonlinear partial differential equations
- Deep Neural Networks with ReLU-Sine-Exponential Activations Break Curse of Dimensionality in Approximation on Hölder Class
- A Variational Neural Network Approach for Glacier Modelling with Nonlinear Rheology
- Designing universal causal deep learning models: The geometric (Hyper)transformer
- On mathematical modeling in image reconstruction and beyond
- Optimal approximation rate of ReLU networks in terms of width and depth
- Discontinuous neural networks and discontinuity learning
Cites Work
- Optimization by Simulated Annealing
- On a constructive proof of Kolmogorov's superposition theorem
- Lower bounds for approximation by MLP neural networks
- Multilayer feedforward networks are universal approximators
- Optimal nonlinear approximation
- Exponential convergence of the deep neural network approximation for analytic functions
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Nonlinear approximation via compositions
- A priori estimates of the population risk for two-layer neural networks
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- A consensus-based model for global optimization and its mean-field limit
- Universal approximation bounds for superpositions of a sigmoidal function
- Optimal Approximation with Sparsely Connected Deep Neural Networks
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Error bounds for approximations with deep ReLU neural networks in W^{s,p} norms
- Deep Network Approximation Characterized by Number of Neurons
- A note on the expressive power of deep rectified linear unit networks in high-dimensional spaces
- A Simplex Method for Function Minimization
- Approximation by superpositions of a sigmoidal function