Bounds on the learning capacity of some multi-layer networks (Q1115371)

From MaRDI portal
scientific article
Language: English
Label: Bounds on the learning capacity of some multi-layer networks
Description: scientific article

    Statements

    Bounds on the learning capacity of some multi-layer networks (English)
    1989
    We obtain bounds for the capacity of some multi-layer networks of linear threshold units. In the case of a network having \(n\) inputs, a single layer of \(h\) hidden units and an output layer of \(s\) units, where all the weights in the network are variable and \(s\leq h\leq n\), the capacity \(m\) satisfies \(2n\leq m\leq nt\log t\), where \(t=1+h/s\). We consider in more detail the case where there is a single output that is a fixed Boolean function of the hidden units. In this case our upper bound is of order \(nh\log h\), but the argument which provided the lower bound of \(2n\) no longer applies. However, by explicit computation in low-dimensional cases we show that the capacity exceeds \(2n\) but is substantially less than the upper bound. Finally, we describe a learning algorithm for multi-layer networks with a single output unit. This greatly outperforms back propagation at the task of learning random vectors and provides further empirical evidence that the lower bound of \(2n\) can be exceeded.
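    The bounds stated above can be evaluated directly. A minimal sketch, assuming a natural logarithm (the abstract writes \(\log\) without specifying the base, and the upper bound holds up to constant factors); the function name and example values are illustrative, not from the paper:

```python
import math

def capacity_bounds(n, h, s):
    """Lower and upper capacity bounds for a threshold network
    with n inputs, h hidden units, s output units (s <= h <= n)."""
    assert s <= h <= n, "bounds stated for s <= h <= n"
    t = 1 + h / s                 # t = 1 + h/s as in the abstract
    lower = 2 * n                 # lower bound: 2n
    upper = n * t * math.log(t)   # upper bound: n t log t (natural log assumed)
    return lower, upper

# Illustrative values: n=10, h=4, s=2 gives t=3
lower, upper = capacity_bounds(10, 4, 2)
print(lower, upper)
```

    For these values the capacity \(m\) would lie between 20 and roughly 33, consistent with the claim that the gap between the two bounds grows with \(t\).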
    capacity bounds
    multi-layer networks of linear threshold units
    learning algorithm
