Gaining degrees of freedom in subsymbolic learning (Q5941074)
From MaRDI portal
Property / full work available at URL: https://doi.org/10.1016/s0304-3975(99)00289-3
Property / OpenAlex ID: W2058481336
Language | Label | Description | Also known as
---|---|---|---
English | Gaining degrees of freedom in subsymbolic learning | scientific article; zbMATH DE number 1635239 |
Statements
Gaining degrees of freedom in subsymbolic learning (English)
20 August 2001
We provide theoretical results on the sample complexity of PAC learning when the hypotheses are given by subsymbolic devices such as neural networks. In this framework we give new foundations to the notion of degrees of freedom of a statistic and relate it to the complexity of a concept class. Thus, for a given concept class and a given sample size, we discuss the efficiency of subsymbolic learning algorithms in terms of the degrees of freedom of the computed statistic. In this setting we appraise the sample complexity overhead that comes from relying on approximate hypotheses, and we exhibit an increase in the degrees of freedom yielded by embedding available formal knowledge into the algorithm. For a known sample distribution, these quantities are related to the learning approximation goal, and a special production prize is shown. Finally, we prove that testing the approximation capability of a neural network generally demands a smaller sample size than training it.
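For context on the sample-complexity quantities the abstract refers to, the following is a minimal sketch of the textbook PAC bound for a finite, realizable hypothesis class; it is standard background only, not the degrees-of-freedom analysis developed in the paper.

```latex
% Textbook PAC sample-complexity bound for a finite, realizable hypothesis
% class H (standard background, not the paper's refined analysis):
% with probability at least 1 - \delta over m i.i.d. examples, every
% hypothesis in H consistent with the sample has error at most \epsilon
% whenever m >= (1/\epsilon)(ln|H| + ln(1/\delta)).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  m \;\ge\; \frac{1}{\varepsilon}\Bigl(\ln\lvert H\rvert + \ln\tfrac{1}{\delta}\Bigr)
\]
\end{document}
```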
computational learning
sentry functions
nested concept classes
approximate learning
neural networks