The Barron space and the flow-induced function spaces for neural network models
From MaRDI portal
Publication:2117337
DOI: 10.1007/S00365-021-09549-Y · zbMATH Open: 1490.65020 · arXiv: 1906.08039 · OpenAlex: W3165099133 · MaRDI QID: Q2117337 · FDO: Q2117337
Publication date: 21 March 2022
Published in: Constructive Approximation
Abstract: A key issue in the analysis of machine learning models is identifying the appropriate function space and norm for the model: the set of functions, endowed with a quantity, that controls the approximation and estimation errors of a particular machine learning model. In this paper, we address this issue for two representative neural network models: two-layer networks and residual neural networks. We define the Barron space and show that it is the right space for two-layer neural network models, in the sense that optimal direct and inverse approximation theorems hold for functions in the Barron space. For residual neural network models, we construct the so-called flow-induced function space and prove direct and inverse approximation theorems for this space. In addition, we show that the Rademacher complexity of bounded sets under these norms admits optimal upper bounds.
Full work available at URL: https://arxiv.org/abs/1906.08039
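For a finite two-layer network, the quantity controlling approximation and estimation errors in the abstract is a path-norm-style analogue of the Barron norm. The sketch below is illustrative only (the function names, the ReLU activation, and the specific norm form sum_k |a_k| (||b_k||_1 + |c_k|) are assumptions, not taken verbatim from this record):

```python
import numpy as np

def two_layer_relu(x, a, B, c):
    """Evaluate f(x) = sum_k a_k * relu(b_k . x + c_k)
    for a two-layer ReLU network with outer weights a,
    inner weight matrix B (rows b_k), and biases c."""
    return np.maximum(B @ x + c, 0.0) @ a

def path_norm(a, B, c):
    """Discrete analogue of the Barron norm for a finite network:
    sum over neurons of |a_k| * (||b_k||_1 + |c_k|).
    This form is an assumption for illustration."""
    return float(np.sum(np.abs(a) * (np.abs(B).sum(axis=1) + np.abs(c))))

# Tiny example: 3 neurons in input dimension 2 (illustrative values).
a = np.array([1.0, -2.0, 0.5])
B = np.array([[1.0, 0.0], [0.5, -0.5], [0.0, 2.0]])
c = np.array([0.0, 1.0, -1.0])

# Per-neuron contributions: 1*(1+0), 2*(1+1), 0.5*(2+1) -> total 6.5
print(path_norm(a, B, c))  # 6.5
```

Norms of this type are width-independent, which is why bounded balls in them admit the dimension-free Rademacher complexity bounds the abstract refers to.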
Recommendations
- A New Function Space from Barron Class and Application to Neural Network Approximation
- Understanding neural networks with reproducing kernel Banach spaces
- Representation formulas and pointwise properties for Barron functions
- Approximation spaces of deep neural networks
- Rademacher complexity and the generalization error of residual networks
Classification: Artificial neural networks and deep learning (68T07); Algorithms for approximation of functions (65D15)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- Theory of Reproducing Kernels
- Title not available
- Title not available
- Understanding Machine Learning
- Title not available
- The finite element methods for elliptic problems.
- 10.1162/153244303321897690
- Title not available
- On the tractability of multivariate integration and approximation by neural networks
- Approximation and estimation bounds for artificial neural networks
- Bounds on rates of variable-basis and neural-network approximation
- Proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
- A priori estimates of the population risk for two-layer neural networks
- Breaking the Curse of Dimensionality with Convex Neural Networks
Cited In (36)
- Deep learning methods for partial differential equations and related parameter identification problems
- Understanding neural networks with reproducing kernel Banach spaces
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- Deep Ritz Method for the Spectral Fractional Laplacian Equation Using the Caffarelli--Silvestre Extension
- Two-Layer Neural Networks with Values in a Banach Space
- Improved Analysis of PINNs: Alleviate the CoD for Compositional Solutions
- Nonconvex regularization for sparse neural networks
- Operator learning using random features: a tool for scientific computing
- Generalization error in the deep Ritz method with smooth activation functions
- Recovering the source term in elliptic equation via deep learning: method and convergence analysis
- Low-rank kernel approximation of Lyapunov functions using neural networks
- Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning
- A New Function Space from Barron Class and Application to Neural Network Approximation
- Finite difference schemes for time-space fractional diffusion equations in one- and two-dimensions
- A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems
- Learning High-Dimensional McKean–Vlasov Forward-Backward Stochastic Differential Equations with General Distribution Dependence
- Control of neural transport for normalising flows
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- Causal inference of general treatment effects using neural networks with a diverging number of confounders
- A class of dimension-free metrics for the convergence of empirical measures
- Numerical solution of Poisson partial differential equation in high dimension using two-layer neural networks
- A Reduced Order Schwarz Method for Nonlinear Multiscale Elliptic Equations Based on Two-Layer Neural Networks
- A finite difference scheme for the two-dimensional Gray-Scott equation with fractional Laplacian
- Deep Adaptive Basis Galerkin Method for High-Dimensional Evolution Equations With Oscillatory Solutions
- Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning
- Active learning based sampling for high-dimensional nonlinear partial differential equations
- Approximation results for gradient flow trained shallow neural networks in \(1d\)
- Weighted variation spaces and approximation by shallow ReLU networks
- Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
- The Discovery of Dynamics via Linear Multistep Methods and Deep Learning: Error Estimation
- A kernel framework for learning differential equations and their solution operators
- Efficient and stable SAV-based methods for gradient flows arising from deep learning
- Simultaneous neural network approximation for smooth functions
- Applied harmonic analysis and data science. Abstracts from the workshop held April 21--26, 2024
- Greedy training algorithms for neural networks and applications to PDEs
- A two-branch symmetric domain adaptation neural network based on Ulam stability theory