The Barron space and the flow-induced function spaces for neural network models
From MaRDI portal
Publication: Q2117337
Abstract: One of the key issues in the analysis of machine learning models is to identify the appropriate function space and norm for the model, that is, the set of functions endowed with a quantity that controls the approximation and estimation errors of a particular machine learning model. In this paper, we address this issue for two representative neural network models: two-layer networks and residual neural networks. We define the Barron space and show that it is the right space for two-layer neural network models, in the sense that optimal direct and inverse approximation theorems hold for functions in the Barron space. For residual neural network models, we construct the so-called flow-induced function space and prove direct and inverse approximation theorems for this space. In addition, we show that the Rademacher complexity of bounded sets under these norms admits optimal upper bounds.
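As a reminder of the central objects, a standard presentation of the Barron norm and the direct approximation theorem for two-layer networks is sketched below. This follows the commonly used formulation of Barron spaces; the precise norm variants and constants are those given in the paper itself.

```latex
% A Barron function admits an integral (expectation) representation
% over two-layer units with activation \sigma and parameter law \rho:
%   f(x) = \mathbb{E}_{(a,w,b)\sim\rho}\big[\, a\,\sigma(w^{\top}x + b) \,\big].
%
% The Barron norm is the infimum over all such representations:
\[
  \|f\|_{\mathcal{B}}
    = \inf_{\rho}\;
      \mathbb{E}_{(a,w,b)\sim\rho}\big[\, |a|\,(\|w\|_{1} + |b|) \,\big].
\]
% Direct approximation theorem (Monte Carlo rate): for f in the Barron
% space there exists a two-layer network f_m with m neurons such that
\[
  \|f - f_m\|_{L^{2}(\mu)}^{2} \;\lesssim\; \frac{\|f\|_{\mathcal{B}}^{2}}{m},
\]
% i.e. the rate is dimension-independent, which is the sense in which
% the Barron space is "the right space" for two-layer models.
```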
Recommendations
- A New Function Space from Barron Class and Application to Neural Network Approximation
- Understanding neural networks with reproducing kernel Banach spaces
- Representation formulas and pointwise properties for Barron functions
- Approximation spaces of deep neural networks
- Rademacher complexity and the generalization error of residual networks
Cites work
- scientific article; zbMATH DE number 48727
- scientific article; zbMATH DE number 477682
- scientific article; zbMATH DE number 1972910
- scientific article; zbMATH DE number 1785556
- 10.1162/153244303321897690
- A priori estimates of the population risk for two-layer neural networks
- Approximation and estimation bounds for artificial neural networks
- Bounds on rates of variable-basis and neural-network approximation
- Breaking the curse of dimensionality with convex neural networks
- On the tractability of multivariate integration and approximation by neural networks
- Proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
- The finite element methods for elliptic problems.
- Theory of Reproducing Kernels
- Understanding machine learning. From theory to algorithms
- Universal approximation bounds for superpositions of a sigmoidal function
Cited in (37)
- Applied harmonic analysis and data science. Abstracts from the workshop held April 21--26, 2024
- Low-rank kernel approximation of Lyapunov functions using neural networks
- Weighted variation spaces and approximation by shallow ReLU networks
- Numerical solution of Poisson partial differential equation in high dimension using two-layer neural networks
- Improved Analysis of PINNs: Alleviate the CoD for Compositional Solutions
- A New Function Space from Barron Class and Application to Neural Network Approximation
- Learning High-Dimensional McKean–Vlasov Forward-Backward Stochastic Differential Equations with General Distribution Dependence
- Simultaneous neural network approximation for smooth functions
- Learning the mapping \(\mathbf{x}\mapsto \sum\limits_{i=1}^d x_i^2\): the cost of finding the needle in a haystack
- Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning
- A kernel framework for learning differential equations and their solution operators
- Efficient and stable SAV-based methods for gradient flows arising from deep learning
- Deep Ritz method for the spectral fractional Laplacian equation using the Caffarelli-Silvestre extension
- Deep Adaptive Basis Galerkin Method for High-Dimensional Evolution Equations With Oscillatory Solutions
- Operator learning using random features: a tool for scientific computing
- Generalization error in the deep Ritz method with smooth activation functions
- Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning
- Recovering the source term in elliptic equation via deep learning: method and convergence analysis
- Approximation results for gradient flow trained shallow neural networks in \(1d\)
- Finite difference schemes for time-space fractional diffusion equations in one- and two-dimensions
- Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
- Nonconvex regularization for sparse neural networks
- A Reduced Order Schwarz Method for Nonlinear Multiscale Elliptic Equations Based on Two-Layer Neural Networks
- Greedy training algorithms for neural networks and applications to PDEs
- Understanding neural networks with reproducing kernel Banach spaces
- A finite difference scheme for the two-dimensional Gray-Scott equation with fractional Laplacian
- A two-branch symmetric domain adaptation neural network based on Ulam stability theory
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- Two-Layer Neural Networks with Values in a Banach Space
- A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems
- Deep learning methods for partial differential equations and related parameter identification problems
- Active learning based sampling for high-dimensional nonlinear partial differential equations
- The discovery of dynamics via linear multistep methods and deep learning: error estimation
- Control of neural transport for normalising flows
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- Causal inference of general treatment effects using neural networks with a diverging number of confounders
- A class of dimension-free metrics for the convergence of empirical measures