Nonlinear approximation via compositions (Q2185653)

From MaRDI portal
Property / Wikidata QID: Q92547051
Property / arXiv ID: 1902.10170
Property / cites work: Universal approximation bounds for superpositions of a sigmoidal function
Property / cites work: Saturation classes for max-product neural network operators activated by sigmoidal functions
Property / cites work: Convergence for a family of neural network operators in Orlicz spaces
Property / cites work: Approximation results in Orlicz spaces for sequences of Kantorovich max-product neural network operators
Property / cites work: Approximation by superpositions of a sigmoidal function
Property / cites work: Ten Lectures on Wavelets
Property / cites work: Q4215356
Property / cites work: Approximation using scattered shifts of a multivariate function
Property / cites work: Compressed sensing
Property / cites work: Q5396673
Property / cites work: Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
Property / cites work: Nonlinear approximation using Gaussian kernels
Property / cites work: Multilayer feedforward networks are universal approximators
Property / cites work: Efficient distribution-free learning of probabilistic concepts
Property / cites work: Approximation of functions of finite variation by superpositions of a sigmoidal function
Property / cites work: Almost optimal estimates for approximation and learning by radial basis function networks
Property / cites work: Constructive approximate interpolation by neural networks
Property / cites work: Matching pursuits with time-frequency dictionaries
Property / cites work: Optimal approximation of piecewise smooth functions using deep ReLU neural networks
Property / cites work: Multivariate \(n\)-term rational and piecewise polynomial approximation
Property / cites work: Exponential convergence of the deep neural network approximation for analytic functions
Property / cites work: The rate of approximation of Gaussian radial basis neural networks in continuous function space
Property / cites work: Error bounds for approximations with deep ReLU networks


Language: English
Label: Nonlinear approximation via compositions
Description: scientific article

    Statements

    Nonlinear approximation via compositions (English)
    5 June 2020
    Given a function dictionary \(D\) and an approximation budget \(N \in \mathbb{N}\), nonlinear approximation seeks the linear combination of the best \(N\) terms \(\{T_n\}_{1\le n\le N}\subseteq D\) to approximate a given function \(f\) with the minimum approximation error. Motivated by the recent success of deep learning, the authors propose dictionaries whose elements are compositions of functions, and implement the terms \(T_n\) using ReLU feed-forward neural networks (FNNs) with \(L\) hidden layers. They further quantify the improvement of the best \(N\)-term approximation rate in terms of \(N\) as the depth \(L\) increases. Finally, they show that, for approximating Hölder continuous functions, dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries of narrow and very deep FNNs, provided the number of computer cores available for parallel computing exceeds \(N\).
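    To make the best \(N\)-term setup above concrete, here is a minimal sketch, not the authors' construction: it greedily selects \(N\) one-hidden-layer ReLU atoms (the shallowest case, \(L = 1\)) from a finite dictionary via orthogonal matching pursuit. The target function, sampling grid, dictionary parameters, and budget \(N\) are all illustrative assumptions.

        # Minimal sketch (illustrative, not the paper's method): best-N-term
        # approximation of a Hölder continuous target on [0, 1] by greedy
        # selection from a dictionary of ReLU ridge atoms relu(w*x + b).
        import numpy as np

        def relu(z):
            return np.maximum(z, 0.0)

        x = np.linspace(0.0, 1.0, 512)    # sampling grid on [0, 1]
        f = np.sqrt(np.abs(x - 0.5))      # Hölder continuous of order 1/2

        # Dictionary D: ReLU atoms relu(w*x + b), normalized on the grid.
        # The weight/bias ranges below are arbitrary illustrative choices.
        ws = [-4.0, -2.0, -1.0, 1.0, 2.0, 4.0]
        bs = np.linspace(-4.0, 4.0, 33)
        atoms = np.array([relu(w * x + b) for w in ws for b in bs])
        mask = np.linalg.norm(atoms, axis=1) > 1e-12
        atoms = atoms[mask] / np.linalg.norm(atoms[mask], axis=1)[:, None]

        # Orthogonal matching pursuit: pick the atom most correlated with the
        # residual, then refit all selected coefficients by least squares.
        N, residual, chosen = 8, f.copy(), []
        for _ in range(N):
            chosen.append(int(np.argmax(np.abs(atoms @ residual))))
            A = atoms[chosen].T                          # grid x chosen atoms
            coef, *_ = np.linalg.lstsq(A, f, rcond=None)
            residual = f - A @ coef

        print(f"N = {N} terms, L2 grid error = {np.linalg.norm(residual):.4f}")

    The paper's point is that replacing such shallow atoms with compositions (larger \(L\)) provably improves how fast this error decays in \(N\).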
    deep neural networks
    ReLU activation function
    nonlinear approximation
    function composition
    Hölder continuity
    parallel computing
