A multivariate Riesz basis of ReLU neural networks (Q6144893)

From MaRDI portal

Property / DOI: 10.1016/j.acha.2023.101605
Property / OpenAlex ID: W4387817772
Property / cites work: Optimal Approximation with Sparsely Connected Deep Neural Networks
Property / cites work: Q5190154
Property / cites work: Frames and the projection method
Property / cites work: Frames containing a Riesz basis and approximation of the frame coefficients using finite-dimensional methods
Property / cites work: An introduction to frames and Riesz bases
Property / cites work: Nonlinear approximation and (deep) ReLU networks
Property / cites work: Neural network approximation
Property / cites work: Deep ReLU neural networks in high-dimensional approximation
Property / cites work: Deep Neural Network Approximation Theory
Property / cites work: Approximation spaces of deep neural networks
Property / cites work: Foundations of time-frequency analysis
Property / cites work: Matrix Analysis
Property / cites work: New characterizations of Riesz bases
Property / cites work: DeepStack: Expert-level artificial intelligence in heads-up no-limit poker
Property / cites work: Optimal approximation of piecewise smooth functions using deep ReLU neural networks
Property / cites work: Nonparametric regression using deep neural networks with ReLU activation function
Property / cites work: Q3735790
Property / cites work: Error bounds for approximations with deep ReLU networks
Property / cites work: Q3955832

scientific article; zbMATH DE number 7796953

    Statements

    A multivariate Riesz basis of ReLU neural networks (English)
    30 January 2024
    Over the last decade, artificial neural networks have driven enormous success in approximation theory and in learning tasks across a vast number of scientific areas, for example computer vision, speech recognition, natural language processing, game theory, signal processing, neuroscience and the social sciences. This paper contributes to the theoretical framework for understanding neural networks in the following sense. A crucial property of artificial neural networks is that they approximate functions of many variables very well, which often makes it possible to avoid the curse of dimensionality. The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces and that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman in the context of dynamic programming, and it generally refers to the difficulties that arise when the number of data points is small (in a suitably defined sense) relative to the intrinsic dimension of the data. The aim of this paper is to advance the use of artificial neural networks for the approximation of multivariate functions by constructing a new system of ReLU neural networks which forms a Riesz basis of the space \(L_2([0, 1]^d)\) for every \(d\geq 1\). To this end, the authors consider the trigonometric-like system of piecewise linear functions introduced recently by Daubechies, DeVore, Foucart, Hanin, and Petrova and use Gershgorin's theorem to prove that this system indeed provides a Riesz basis of \(L_2([0, 1])\). The authors then generalize the system to dimensions \(d>1\), notably avoiding tensor products. The paper is well written with a good set of references.
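
    The following is a minimal numerical sketch (not part of the record above) of the Gershgorin-type argument mentioned in the review. Recall that a system \((f_n)_n\) is a Riesz basis of \(L_2\) if it is complete and there are constants \(0<A\le B\) with \(A\sum_n |c_n|^2 \le \|\sum_n c_n f_n\|_{L_2}^2 \le B\sum_n |c_n|^2\) for all square-summable coefficients; for a finite family the optimal \(A\) and \(B\) are the extreme eigenvalues of the Gram matrix, which Gershgorin's theorem encloses via row sums. The Python sketch below applies this check to a toy family of sawtooth functions obtained by composing the hat function (in the spirit of the Daubechies-DeVore-Foucart-Hanin-Petrova system); the family, the truncation K, and the grid resolution are illustrative assumptions, not the construction of the paper.

        import numpy as np

        def relu(t):
            return np.maximum(t, 0.0)

        def hat(x):
            # Hat function on [0, 1], written as a shallow ReLU network:
            # h(x) = 2 ReLU(x) - 4 ReLU(x - 1/2) + 2 ReLU(x - 1).
            return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

        def sawtooth(x, k):
            # k-fold self-composition of the hat: a piecewise linear,
            # "trigonometric-like" sawtooth realized by a depth-k ReLU network.
            y = x
            for _ in range(k):
                y = hat(y)
            return y

        # Toy truncated system on [0, 1], sampled at midpoint-rule nodes.
        n, K = 1 << 14, 8
        x = (np.arange(n) + 0.5) / n
        dx = 1.0 / n
        F = np.stack([sawtooth(x, k) for k in range(1, K + 1)])
        F = F - F.mean(axis=1, keepdims=True)                      # center each function
        F = F / np.sqrt((F ** 2).sum(axis=1, keepdims=True) * dx)  # L2-normalize

        # Gram matrix G[j, k] = <f_j, f_k>_{L2}; since ||sum_j c_j f_j||^2 = c^T G c,
        # the extreme eigenvalues of G are the Riesz bounds of the truncated system.
        G = (F @ F.T) * dx
        eig = np.linalg.eigvalsh(G)

        # Gershgorin: every eigenvalue lies within the off-diagonal row sum
        # of some row, centered at the corresponding diagonal entry.
        radii = np.abs(G).sum(axis=1) - np.abs(np.diag(G))
        print("eigenvalues of G in  [%.4f, %.4f]" % (eig.min(), eig.max()))
        print("Gershgorin enclosure [%.4f, %.4f]"
              % ((np.diag(G) - radii).min(), (np.diag(G) + radii).max()))

    For the system actually constructed in the paper one would substitute the authors' normalized family; a Gershgorin enclosure bounded away from zero uniformly in the truncation yields the lower Riesz bound, while completeness must be verified separately for the basis property.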
    Riesz basis
    rectified linear unit
    artificial neural networks
    Euler product
    Möbius function

    Identifiers

    zbMATH DE number: 7796953
    DOI: 10.1016/j.acha.2023.101605
    OpenAlex ID: W4387817772