Piecewise linear functions representable with infinite width shallow ReLU neural networks
Publication: 6052562
DOI: 10.1090/bproc/186
arXiv: 2307.14373
OpenAlex: W4386957178
MaRDI QID: Q6052562
Publication date: 17 October 2023
Published in: Proceedings of the American Mathematical Society, Series B
Full work available at URL: https://arxiv.org/abs/2307.14373
Cites Work
- Harmonic analysis of neural networks
- Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation
- Representation formulas and pointwise properties for Barron functions
- A priori estimates of the population risk for two-layer neural networks
- Neural network with unbounded activation functions is universal approximator
- ReLU deep neural networks and linear finite elements
- Large intersections of large sets
- On the σ-class generated by open balls
- Breaking the curse of dimensionality with convex neural networks
- The ridgelet transform of distributions
- Approximation by superpositions of a sigmoidal function
- Probability theory. A comprehensive course