Nonlinear approximation via compositions (Q2185653)

From MaRDI portal
scientific article
Language: English
Label: Nonlinear approximation via compositions
Description: scientific article

    Statements

    Nonlinear approximation via compositions (English)
    5 June 2020
    Given a function dictionary \(\mathcal{D}\) and an approximation budget \(N \in \mathbb{N}\), nonlinear approximation seeks the linear combination of the best \(N\) terms \(\{T_n\}_{1\le n\le N}\subseteq \mathcal{D}\) that approximates a given function \(f\) with the minimum approximation error. Motivated by the recent success of deep learning, the authors propose dictionaries whose elements are compositions of functions, and implement these compositions using ReLU feed-forward neural networks (FNNs) with \(L\) hidden layers. They further quantify the improvement of the best \(N\)-term approximation rate in terms of \(N\) as the depth \(L\) increases. Finally, they show that, for approximating Hölder continuous functions, dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries of narrow and very deep FNNs, provided the number of computer cores available for parallel computing exceeds \(N\).
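    The best \(N\)-term selection described above can be illustrated with a simple greedy, matching-pursuit-style procedure over a finite dictionary. This is only a sketch of the general nonlinear-approximation setup, not the paper's neural-network construction; the function name `best_n_term_greedy` and the cosine dictionary are illustrative choices, not from the article.

```python
import numpy as np

def best_n_term_greedy(f, dictionary, n_terms):
    """Greedy best N-term approximation of f from the columns of `dictionary`.

    f          : target vector (a function sampled on a grid)
    dictionary : matrix whose columns are the dictionary elements
    n_terms    : approximation budget N
    Returns the selected column indices and their least-squares coefficients.
    """
    residual = f.copy()
    selected = []
    for _ in range(n_terms):
        # Pick the dictionary element most correlated with the current residual.
        scores = np.abs(dictionary.T @ residual)
        scores[selected] = -np.inf          # never reselect a chosen term
        selected.append(int(np.argmax(scores)))
        # Refit the coefficients over all selected terms and update the residual.
        sub = dictionary[:, selected]
        coeffs, *_ = np.linalg.lstsq(sub, f, rcond=None)
        residual = f - sub @ coeffs
    return selected, coeffs

# Toy example: the target is an exact 2-term combination of cosine atoms,
# so a budget of N = 3 recovers it (essentially) exactly.
x = np.linspace(0.0, 1.0, 200)
D = np.stack([np.cos(np.pi * k * x) for k in range(20)], axis=1)
f = 2.0 * np.cos(np.pi * 3 * x) - 0.5 * np.cos(np.pi * 7 * x)
idx, c = best_n_term_greedy(f, D, n_terms=3)
```

    The greedy rule is one standard heuristic for the (generally combinatorial) best \(N\)-term problem; the article's point is that enriching \(\mathcal{D}\) with compositions realized by ReLU FNNs improves the achievable rate in \(N\).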
    deep neural networks
    ReLU activation function
    nonlinear approximation
    function composition
    Hölder continuity
    parallel computing
