Approximation by neural networks with a bounded number of nodes at each level (Q1395810)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Approximation by neural networks with a bounded number of nodes at each level | scientific article | |
Statements
Approximation by neural networks with a bounded number of nodes at each level (English)
1 July 2003
The paper deals with approximation by neural networks in the space \(C\) of continuous functions \(f: {\mathbb R}^d \to {\mathbb R}^{d'}\), equipped with the topology of uniform convergence on compacta. It is well known that the set of all neural networks with one hidden layer is dense in \(C\) if and only if the activation function \(\sigma\) is not a polynomial; here the number of nodes in the hidden layer is allowed to be arbitrarily large. In the paper under review, density is established for the set of all multilayer networks with at most \(d+d'+2\) nodes in each layer. The restriction on \(\sigma\) is even weaker (it must only be non-linear), but the number of layers is allowed to be arbitrarily large.
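In symbols, the density statement summarized above can be written as follows (a paraphrase of the review's description; the precise network architecture and the exact hypotheses on \(\sigma\) are those of the paper):
\[
\forall f \in C\bigl({\mathbb R}^d,{\mathbb R}^{d'}\bigr),\ \forall K \subset {\mathbb R}^d \text{ compact},\ \forall \varepsilon > 0:\quad
\exists\, N \text{ (a multilayer network with non-linear } \sigma \text{ and at most } d+d'+2 \text{ nodes per layer) such that } \sup_{x \in K} \bigl\| f(x) - N(x) \bigr\| < \varepsilon .
\]
The number of layers of \(N\) is not bounded; it may depend on \(f\), \(K\), and \(\varepsilon\).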
multilayer
neural
density