Linear classifiers are nearly optimal when hidden variables have diverse effects (Q420914)

Language: English
Label: Linear classifiers are nearly optimal when hidden variables have diverse effects
Description: scientific article

    Statements

    Linear classifiers are nearly optimal when hidden variables have diverse effects (English)
    23 May 2012
    In this paper the authors show that a linear classifier can provide a good approximation even when the optimal classifier is much more complex. To establish this, they analyze a classification problem in which the data are generated by a two-tiered random process: hidden variables are drawn first, and the observed variables are then generated from them. Concretely, they prove that if the hidden variables have non-negligible effects on many observed variables, then a linear classifier closely approximates the error rate of the Bayes-optimal classifier. Moreover, the hinge loss of this linear classifier is not much larger than the Bayes error rate.
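The setting is easiest to picture on a concrete toy instance. The following Python sketch is purely illustrative and not the authors' construction: the particular distributions, the sample sizes n, k, d, the dense effect matrix A, and the use of scikit-learn's LinearSVC with hinge loss are all assumptions made for this example, and the exact Bayes-optimal rule is not computed.

```python
# Illustrative sketch only (not the authors' construction): a two-tiered
# generative process in which a few hidden variables, each influencing many
# observed coordinates, drive the class label, followed by a hinge-loss
# linear classifier fit on the observations alone.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, k, d = 5000, 5, 200          # samples, hidden variables, observed variables

# Tier 1: class label, then hidden variables whose bias depends on the label.
y = rng.choice([-1, 1], size=n)
p = 0.5 + 0.2 * y[:, None] * np.ones(k)      # P(h_j = 1 | y) is 0.7 or 0.3
h = rng.binomial(1, p).astype(float)

# Tier 2: a dense effect matrix, so every hidden variable has a
# non-negligible effect on many observed variables, plus observation noise.
A = rng.normal(0.0, 1.0, size=(k, d))
x = h @ A + rng.normal(0.0, 1.0, size=(n, d))

# Fit a linear classifier with hinge loss on half the data, test on the rest.
# (The Bayes-optimal rule for this mixture is nonlinear and is not computed
# here; the sketch only shows the linear rule performing well.)
clf = LinearSVC(loss="hinge", max_iter=10_000).fit(x[: n // 2], y[: n // 2])
print(f"linear classifier held-out accuracy: {clf.score(x[n // 2:], y[n // 2:]):.3f}")
```

The dense effect matrix A is the "diverse effects" assumption in miniature: each hidden variable touches every observed coordinate, which is the regime the paper analyzes.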
    learning theory
    Bayes optimal rule
    linear classification
    hidden variables