Nonlinear approximation via compositions (Q2185653)
From MaRDI portal
Language | Label | Description | Also known as
---|---|---|---
English | Nonlinear approximation via compositions | scientific article |
Statements
Nonlinear approximation via compositions (English)
5 June 2020
Given a function dictionary \(\mathcal{D}\) and an approximation budget \(N \in \mathbb{N}\), nonlinear approximation seeks the linear combination of the best \(N\) terms \(\{T_n\}_{1\le n\le N}\subseteq \mathcal{D}\) to approximate a given function \(f\) with the minimum approximation error. Motivated by the recent success of deep learning, the authors propose dictionaries whose elements are compositions of functions, and implement \(T\) using ReLU feed-forward neural networks (FNNs) with \(L\) hidden layers. They further quantify the improvement of the best \(N\)-term approximation rate in terms of \(N\) as the depth \(L\) increases. Finally, they show that, for approximating Hölder continuous functions, dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries of narrow and very deep FNNs, provided that the number of computer cores available for parallel computing exceeds \(N\).
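To make the setup concrete, here is a minimal Python sketch of best \(N\)-term nonlinear approximation from a ReLU dictionary. It is an illustration only, not the paper's construction: the dictionary of single ReLU neurons, the target \(f(x)=\sqrt{x}\), and the greedy (orthogonal-matching-pursuit style) selection rule are all assumptions made for the demonstration.

```python
import numpy as np

# Toy best N-term approximation: greedily pick N terms from a dictionary
# of single ReLU neurons to approximate a Hoelder continuous target.
# (Hypothetical setup, not the paper's algorithm.)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 512)
f = np.sqrt(x)  # Hoelder continuous with exponent 1/2

# Dictionary D: column k realizes x -> relu(w_k * x + b_k), the simplest
# function a one-hidden-layer ReLU FNN computes.
M = 2000
w = rng.uniform(-8.0, 8.0, size=M)
b = rng.uniform(-8.0, 8.0, size=M)
D = np.maximum(w[None, :] * x[:, None] + b[None, :], 0.0)  # shape (512, M)
D /= np.linalg.norm(D, axis=0) + 1e-12  # normalize each dictionary element

def best_n_term(f, D, N):
    """Greedy N-term selection with least-squares refitting (OMP-style)."""
    residual, chosen = f.copy(), []
    for _ in range(N):
        k = int(np.argmax(np.abs(D.T @ residual)))  # most correlated term
        chosen.append(k)
        coeffs, *_ = np.linalg.lstsq(D[:, chosen], f, rcond=None)
        residual = f - D[:, chosen] @ coeffs
    return chosen, coeffs, residual

for N in (4, 16, 64):
    _, _, r = best_n_term(f, D, N)
    print(f"N = {N:3d}   relative L2 error = {np.linalg.norm(r)/np.linalg.norm(f):.2e}")
```

The decay of this error as \(N\) grows is the quantity the paper studies; replacing each single neuron with a deeper composition (an FNN with \(L\) hidden layers) enlarges the dictionary and, as the review states, improves the achievable rate.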
deep neural networks
ReLU activation function
nonlinear approximation
function composition
Hölder continuity
parallel computing