Approximation by neural networks with a bounded number of nodes at each level (Q1395810)

From MaRDI portal
Author: Gustaf Gripenberg
Reviewed by: Yu. I. Makovoz
MaRDI profile type: MaRDI publication profile
Cites work: Error bounds for approximation with neural networks
Cites work: Approximation of continuous functions of several variables by an arbitrary nonlinear continuous function of one variable, linear functions, and their superpositions
Cites work: Q4320142
Cites work: Lower bounds for approximation by MLP neural networks
Cites work: Uniform approximation by neural networks
Cites work: Q4938227
Full work available at URL: https://doi.org/10.1016/s0021-9045(03)00078-9
OpenAlex ID: W2036450593


scientific article
Language: English
Label: Approximation by neural networks with a bounded number of nodes at each level

    Statements

    Title: Approximation by neural networks with a bounded number of nodes at each level (English)
    Publication date: 1 July 2003
    The paper deals with approximation by neural networks in the space \(C\) of continuous functions \(f: {\mathbb R}^d \to {\mathbb R}^{d'}\), equipped with the topology of uniform convergence on compacta. It is well known that the set of all neural networks with one hidden layer is dense in \(C\) if and only if the activation function \(\sigma\) is not a polynomial; here the number of nodes in the network is allowed to be arbitrarily large. In the paper under review, density is established for the set of all multilayer networks with at most \(d+d'+2\) nodes in each layer. The restriction on \(\sigma\) is even weaker: it need only be nonlinear. On the other hand, the number of layers is allowed to be arbitrarily large.
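    The architecture the result concerns, depth-unbounded networks whose every hidden layer has the same fixed width, can be illustrated with a minimal NumPy sketch. This is not the paper's construction; the function name, random weights, and the choice of \(\tanh\) as the nonlinear activation are illustrative assumptions, shown only to make the width bound \(d+d'+2\) concrete for \(d=d'=1\).

```python
import numpy as np

def narrow_mlp(x, weights, biases, sigma):
    """Evaluate a multilayer perceptron in which every hidden layer
    has the same fixed width; only the depth varies."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigma(W @ h + b)  # hidden layers, all of bounded width
    W, b = weights[-1], biases[-1]
    return W @ h + b          # affine output layer

# d = 1 input, d' = 1 output, so the width bound is d + d' + 2 = 4.
d, d_out, width, depth = 1, 1, 4, 6
rng = np.random.default_rng(0)
dims = [d] + [width] * depth + [d_out]
# Random (hypothetical) parameters; the theorem is about what such
# networks *can* approximate, not about any particular weights.
weights = [rng.standard_normal((m, n)) for n, m in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(m) for m in dims[1:]]

y = narrow_mlp(np.array([0.5]), weights, biases, np.tanh)
```

    Note that increasing `depth` leaves every weight matrix at most `width` wide, which is exactly the regime of the theorem: approximation power comes from adding layers, not from widening them.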
    Keywords: multilayer; neural; density
