Representation formulas and pointwise properties for Barron functions
DOI: 10.1007/S00526-021-02156-6
zbMATH Open: 1482.41013
arXiv: 2006.05982
OpenAlex: W4226239178
Wikidata: Q113904947 (Scholia: Q113904947)
MaRDI QID: Q2113295
Authors: Stephan Wojtowytsch, Weinan E
Publication date: 14 March 2022
Published in: Calculus of Variations and Partial Differential Equations
Full work available at URL: https://arxiv.org/abs/2006.05982
Recommendations
- A New Function Space from Barron Class and Application to Neural Network Approximation
- Understanding neural networks with reproducing kernel Banach spaces
- Provable approximation properties for deep neural networks
- Banach space representer theorems for neural networks and ridge splines
- A global universality of two-layer neural networks with ReLU activations
Classification (MSC)
- Artificial neural networks and deep learning (68T07)
- Approximation by other special function classes (41A30)
- Banach spaces of continuous, differentiable or analytic functions (46E15)
- Special properties of functions of several variables, Hölder conditions, etc. (26B35)
- Representation and superposition of functions (26B40)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- Measure theory and fine properties of functions
- Functional analysis, Sobolev spaces and partial differential equations
- The Variational Formulation of the Fokker--Planck Equation
- Title not available
- Applied functional analysis. Functional analysis, Sobolev spaces and elliptic differential equations
- Approximation by superpositions of a sigmoidal function
- Hinging hyperplanes for regression, classification, and function approximation
- Variational Analysis in Sobolev and BV Spaces
- Wahrscheinlichkeitstheorie
- A priori estimates of the population risk for two-layer neural networks
- A mean field view of the landscape of two-layer neural networks
- Breaking the curse of dimensionality with convex neural networks
- Machine learning from a continuous viewpoint. I
- Mean field analysis of neural networks: a law of large numbers
- Approximation by combinations of ReLU and squared ReLU ridge functions with \(\ell^1\) and \(\ell^0\) controls
- Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels
Cited In (20)
- Understanding neural networks with reproducing kernel Banach spaces
- Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
- Two-Layer Neural Networks with Values in a Banach Space
- The Barron space and the flow-induced function spaces for neural network models
- A convergent deep learning algorithm for approximation of polynomials
- Numerical analysis of physics-informed neural networks and related models in physics-informed machine learning
- A New Function Space from Barron Class and Application to Neural Network Approximation
- Piecewise linear functions representable with infinite width shallow ReLU neural networks
- Neural network approximation and estimation of classifiers with classification boundary in a Barron class
- A priori generalization error analysis of two-layer neural networks for solving high dimensional Schrödinger eigenvalue problems
- Deep Ritz method for the spectral fractional Laplacian equation using the Caffarelli-Silvestre extension
- Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
- Numerical solution of Poisson partial differential equation in high dimension using two-layer neural networks
- Uniform approximation rates and metric entropy of shallow neural networks
- Embeddings between Barron spaces with higher-order activation functions
- Nonlinear weighted directed acyclic graph and a priori estimates for neural networks
- Applied harmonic analysis and data science. Abstracts from the workshop held April 21--26, 2024
- Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks
- Solving the regularized Schamel equation by the singular planar dynamical system method and the deep learning method
- Greedy training algorithms for neural networks and applications to PDEs