The Barron space and the flow-induced function spaces for neural network models
DOI: 10.1007/s00365-021-09549-y · zbMath: 1490.65020 · arXiv: 1906.08039 · OpenAlex: W3165099133 · MaRDI QID: Q2117337
Publication date: 21 March 2022
Published in: Constructive Approximation
Full work available at URL: https://arxiv.org/abs/1906.08039
Mathematics Subject Classification: Artificial neural networks and deep learning (68T07); Algorithms for approximation of functions (65D15)
Related Items
- Deep Adaptive Basis Galerkin Method for High-Dimensional Evolution Equations With Oscillatory Solutions
- Two-Layer Neural Networks with Values in a Banach Space
- Nonconvex regularization for sparse neural networks
- Deep Ritz Method for the Spectral Fractional Laplacian Equation Using the Caffarelli–Silvestre Extension
- The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
- Simultaneous neural network approximation for smooth functions
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- Finite difference schemes for time-space fractional diffusion equations in one- and two-dimensions
- Deep learning methods for partial differential equations and related parameter identification problems
- A class of dimension-free metrics for the convergence of empirical measures
- A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems
- A finite difference scheme for the two-dimensional Gray-Scott equation with fractional Laplacian
- Active learning based sampling for high-dimensional nonlinear partial differential equations
- A two-branch symmetric domain adaptation neural network based on Ulam stability theory
- Improved Analysis of PINNs: Alleviate the CoD for Compositional Solutions
- Learning High-Dimensional McKean–Vlasov Forward-Backward Stochastic Differential Equations with General Distribution Dependence
- Control of neural transport for normalising flows
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- Causal inference of general treatment effects using neural networks with a diverging number of confounders
- A Reduced Order Schwarz Method for Nonlinear Multiscale Elliptic Equations Based on Two-Layer Neural Networks
- Low-rank kernel approximation of Lyapunov functions using neural networks
- Greedy training algorithms for neural networks and applications to PDEs
- Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning
- Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
- A New Function Space from Barron Class and Application to Neural Network Approximation
Cites Work
- On the tractability of multivariate integration and approximation by neural networks
- Approximation and estimation bounds for artificial neural networks
- Proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
- A priori estimates of the population risk for two-layer neural networks
- Universal approximation bounds for superpositions of a sigmoidal function
- Bounds on rates of variable-basis and neural-network approximation
- 10.1162/153244303321897690
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Understanding Machine Learning
- Theory of Reproducing Kernels