Gaining degrees of freedom in subsymbolic learning (Q5941074)

From MaRDI portal
scientific article; zbMATH DE number 1635239

    Statements

    Gaining degrees of freedom in subsymbolic learning (English)
    20 August 2001
    We provide some theoretical results on the sample complexity of PAC learning when the hypotheses are given by subsymbolic devices such as neural networks. In this framework we give new foundations to the notion of degrees of freedom of a statistic and relate it to the complexity of a concept class. Thus, for a given concept class and a given sample size, we discuss the efficiency of subsymbolic learning algorithms in terms of degrees of freedom of the computed statistic. In this setting we appraise the sample complexity overhead that comes from relying on approximate hypotheses, and display an increase in the degrees of freedom yielded by embedding available formal knowledge into the algorithm. For a known sample distribution, these quantities are related to the learning approximation goal, and a special production prize is shown. Finally, we prove that testing the approximation capability of a neural network generally demands a smaller sample size than training it.
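    For context, a minimal sketch of the sample-complexity notion the abstract builds on. This is the classical PAC bound for a finite hypothesis class and a consistent learner (a textbook result, not the paper's own degrees-of-freedom analysis): to guarantee error at most ε with probability at least 1 − δ, it suffices to draw m ≥ (1/ε)(ln|H| + ln(1/δ)) samples.

    ```python
    import math

    def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
        """Classical PAC sample-complexity bound for a finite hypothesis
        class H and a consistent learner:
            m >= (1/epsilon) * (ln|H| + ln(1/delta)).
        Returns the smallest integer m satisfying the bound."""
        return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

    # Illustrative numbers (not from the paper): |H| = 2**10 hypotheses,
    # target error 5%, confidence 95%.
    m = pac_sample_bound(2**10, epsilon=0.05, delta=0.05)
    ```

    The paper refines this kind of count: richer hypotheses (more degrees of freedom in the computed statistic) inflate the required sample size, while embedding formal knowledge into the algorithm recovers some of that budget.
    
    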
    computational learning
    sentry functions
    nested concept classes
    approximate learning
    neural networks

    Identifiers