
Approximation spaces of deep neural networks
scientific article

Language: English

    Statements

    Approximation spaces of deep neural networks (English)
    Publication date: 21 March 2022
    A deep neural network can be formally represented as a tuple $\Phi = ((T_1,\alpha_1),\dots,(T_L,\alpha_L))$, where the $T_\ell$ are affine-linear maps, $T_\ell(x) = A_\ell x + b_\ell$ with matrices $A_\ell$ and vectors $b_\ell$, the $\alpha_\ell$ are nonlinearities, and $L$ denotes the number of layers of the network. The realization of the deep neural network $\Phi$ is defined as the function \[\mathcal{R}(\Phi):=\alpha_L \circ T_L\circ \cdots \circ \alpha_1 \circ T_1,\] which is implemented by applying the maps layer-wise. The central task of a neural network is, in general, the approximation of a function $f$, given a set of training data $(x_i, f(x_i))$, $i=1,\dots,m$. One chooses a loss function $\mathcal{L}$ and a regularizer $\mathcal{P}$, and the objective is to solve the optimization problem of finding a neural network $\Phi$ that minimizes \[\sum_{i=1}^{m}\mathcal{L}\bigl(\mathcal{R}(\Phi)(x_i), f(x_i)\bigr) + \lambda\,\mathcal{P}(\Phi),\] the goal being the best possible approximation of $f$.

    The aim of the article is to introduce and investigate approximation spaces associated with neural networks. The results can be expected to have an impact on domains such as the theory of expressivity, the statistical analysis of deep learning, and the design of deep neural networks.

    The second section of the article is devoted to the definition of neural networks and their elementary properties. The third section introduces classical approximation spaces as described in Chapter 7 of the book [\textit{R. A. DeVore} and \textit{G. G. Lorentz}, Constructive approximation. Berlin: Springer-Verlag (1993; Zbl 0797.41016)]: roughly, the space $A_q^\alpha$ collects those functions whose error of best approximation $E_n(f)$ by elements of a family $\Sigma_n$ decays like $n^{-\alpha}$. By suitable specialization, these spaces are then used in the context of neural networks as neural network approximation spaces. Individual subsections concentrate, for instance, on connectivity versus the number of neurons and on relations between approximation classes associated with different depth growth functions; the importance of the choice of the activation function for the resulting approximation spaces is also pointed out. The fourth section is devoted mainly to an investigation of the approximation spaces of ReLU networks. Embeddings between Besov spaces and neural network approximation spaces, that is, direct and inverse estimates, are discussed at length in the fifth section. Additional details and proofs are given in Appendix A (A.1--A.11) for Section 2, Appendix B (B.1--B.4) for Section 3, Appendix C (C.1--C.4) for Section 4, Appendix D (D.1--D.5) for Section 5, and Appendix E; the appendices extend over almost half of the article. The list of references comprises 69 titles.
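    The two displayed formulas translate directly into code. The following NumPy sketch is only an illustration of the setup described above, not the paper's construction: the ReLU activation, the squared loss, the nonzero-weight penalty, and all function names are choices made here for concreteness.

import numpy as np

def relu(x):
    # A common choice of nonlinearity alpha_l; the paper treats general activations.
    return np.maximum(x, 0.0)

def realization(phi, x):
    # R(Phi) = alpha_L o T_L o ... o alpha_1 o T_1, applied layer-wise;
    # phi is a list of layers (A_l, b_l, alpha_l) with T_l(x) = A_l x + b_l.
    for A, b, alpha in phi:
        x = alpha(A @ x + b)
    return x

def objective(phi, xs, ys, lam):
    # sum_i L(R(Phi)(x_i), f(x_i)) + lam * P(Phi), with L the squared loss and
    # P counting nonzero weights -- illustrative choices made here, not the
    # paper's prescription.
    data_term = sum(np.sum((realization(phi, x) - y) ** 2) for x, y in zip(xs, ys))
    penalty = sum(np.count_nonzero(A) + np.count_nonzero(b) for A, b, _ in phi)
    return data_term + lam * penalty

# A small two-layer network mapping R^2 to R (identity nonlinearity on the output).
rng = np.random.default_rng(0)
phi = [
    (rng.standard_normal((3, 2)), np.zeros(3), relu),
    (rng.standard_normal((1, 3)), np.zeros(1), lambda x: x),
]
xs = [rng.standard_normal(2) for _ in range(5)]
ys = [np.array([np.sin(x[0]) + x[1]]) for x in xs]  # samples of a target f
print(objective(phi, xs, ys, lam=0.1))

    Minimizing this objective over the weights (in practice by gradient descent) yields the trained network. Counting nonzero weights as the penalty is chosen here because it mirrors the connectivity notion behind the paper's sparsely connected networks, where complexity is graded by the number of nonzero weights.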
    Keywords: deep neural networks; sparsely connected networks; approximation spaces; Besov spaces; direct estimates; inverse estimates; piecewise polynomials; ReLU activation function

    Identifiers

    DOI: 10.1007/s00365-021-09543-4
    arXiv: 1905.01208
    OpenAlex: W2943191253
    MaRDI item: Q2117336