Linear classifiers are nearly optimal when hidden variables have diverse effects (Q420914)
From MaRDI portal
Property / full work available at URL: https://doi.org/10.1007/s10994-011-5262-7 (rank: Normal rank)
Property / OpenAlex ID: W2066953979 (rank: Normal rank)
Language | Label | Description | Also known as
---|---|---|---
English | Linear classifiers are nearly optimal when hidden variables have diverse effects | scientific article |
Statements
Linear classifiers are nearly optimal when hidden variables have diverse effects (English)
0 references
23 May 2012
0 references
In this paper the authors show that a linear classifier can provide a good approximation even when the optimal classifier is much more complex. To establish this, they analyze a classification problem in which the data are generated by a two-tiered random process: hidden variables are drawn first, and the observed variables are generated conditionally on them. Concretely, they prove that if the hidden variables have non-negligible effects on many observed variables, then a linear classifier closely approximates the error rate of the Bayes-optimal classifier. Moreover, the hinge loss of this linear classifier is not much larger than the Bayes error rate.
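The flavor of this result can be illustrated with a small simulation. The generative model below is an illustrative assumption, not the paper's exact setup: a binary label drives a hidden variable (with some sign-flip noise), and that hidden variable affects all observed coordinates, i.e. it has a "diverse" effect. A linear classifier trained by minimizing the hinge loss then achieves an error rate close to the Bayes error, which in this toy model is approximately the hidden-layer flip rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-tiered random process (illustrative assumption, not the paper's model):
# label y in {-1,+1}; hidden variable h = y * flip, where flip is -1 with
# probability 0.1 (so the Bayes error is roughly 0.1); each of d observed
# features is h * w_true_j plus Gaussian noise, so the hidden variable has a
# non-negligible effect on many observed coordinates.
n, d = 5000, 50
y = rng.choice([-1.0, 1.0], size=n)
flip = rng.choice([1.0, -1.0], size=n, p=[0.9, 0.1])  # hidden-layer noise
h = y * flip
w_true = rng.normal(size=d)
X = h[:, None] * w_true[None, :] + rng.normal(scale=2.0, size=(n, d))

# Fit a linear classifier by subgradient descent on the regularized hinge
# loss -- a minimal stand-in for the linear classifiers the paper analyzes.
w = np.zeros(d)
lr = 0.1
for _ in range(200):
    margins = y * (X @ w)
    grad = -(X * y[:, None])[margins < 1].sum(axis=0) / n + 1e-3 * w
    w -= lr * grad

train_err = np.mean(np.sign(X @ w) != y)
# train_err lands close to the ~0.1 Bayes error of this toy model.
```

Because the hidden variable influences all 50 observed features, its sign can be recovered almost perfectly from the data, so the only irreducible error is the 10% flip rate between the hidden variable and the label; the learned linear classifier gets close to that.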
0 references
learning theory
0 references
Bayes optimal rule
0 references
linear classification
0 references
hidden variables
0 references