Linear classifiers are nearly optimal when hidden variables have diverse effects (Q420914): Difference between revisions

From MaRDI portal
Property / review text
 
The authors show that a linear classifier can provide a good approximation even when the optimal classifier is much more complex. To this end, they analyze a classification problem in which the data are generated by a two-tiered random process. Concretely, they prove that if the hidden variables have non-negligible effects on many observed variables, then a linear classifier closely approximates the error rate of the Bayes-optimal classifier; moreover, the hinge loss of this linear classifier is not much greater than the Bayes error rate.
Property / review text: The authors show that a linear classifier can provide a good approximation even when the optimal classifier is much more complex. To this end, they analyze a classification problem in which the data are generated by a two-tiered random process. Concretely, they prove that if the hidden variables have non-negligible effects on many observed variables, then a linear classifier closely approximates the error rate of the Bayes-optimal classifier; moreover, the hinge loss of this linear classifier is not much greater than the Bayes error rate. / rank
 
Normal rank
Property / reviewed by
 
Property / reviewed by: Florin Gorunescu / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 68T05 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 62H30 / rank
 
Normal rank
Property / zbMATH DE Number
 
Property / zbMATH DE Number: 6037842 / rank
 
Normal rank
Property / zbMATH Keywords
 
learning theory
Property / zbMATH Keywords: learning theory / rank
 
Normal rank
Property / zbMATH Keywords
 
Bayes optimal rule
Property / zbMATH Keywords: Bayes optimal rule / rank
 
Normal rank
Property / zbMATH Keywords
 
linear classification
Property / zbMATH Keywords: linear classification / rank
 
Normal rank
Property / zbMATH Keywords
 
hidden variables
Property / zbMATH Keywords: hidden variables / rank
 
Normal rank

Revision as of 21:39, 29 June 2023

Language: English
Label: Linear classifiers are nearly optimal when hidden variables have diverse effects
Description: scientific article

    Statements

    Linear classifiers are nearly optimal when hidden variables have diverse effects (English)
    23 May 2012
    The authors show that a linear classifier can provide a good approximation even when the optimal classifier is much more complex. To this end, they analyze a classification problem in which the data are generated by a two-tiered random process. Concretely, they prove that if the hidden variables have non-negligible effects on many observed variables, then a linear classifier closely approximates the error rate of the Bayes-optimal classifier; moreover, the hinge loss of this linear classifier is not much greater than the Bayes error rate.
    learning theory
    Bayes optimal rule
    linear classification
    hidden variables
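The result summarized in the review can be illustrated with a small simulation. This is a hypothetical toy construction, not the authors' own setup: a single hidden label exerts a small but diverse effect on many observed coordinates. In this Gaussian model the Bayes-optimal rule happens to be linear itself, so the sketch only illustrates the quantities involved (0-1 error, hinge loss, Bayes error rate), not the full theorem, in which the Bayes classifier may be nonlinear.

```python
import numpy as np
from math import erf, sqrt

# Toy model (assumed for illustration): hidden label y in {-1, +1} shifts
# many observed coordinates by a tiny per-coordinate effect mu, so that
# x = y * mu + Gaussian noise.  Here the Bayes rule is sign(mu . x) with
# Bayes error Phi(-||mu||), letting us compare a linear classifier to it.
rng = np.random.default_rng(0)
d, n = 400, 20000
mu = np.full(d, 0.1)            # small effect spread "diversely" over 400 coords

y = rng.choice([-1.0, 1.0], size=n)
x = y[:, None] * mu + rng.standard_normal((n, d))

w = mu / np.linalg.norm(mu)     # linear classifier along the effect direction
scores = x @ w

empirical_error = np.mean(np.sign(scores) != y)                # 0-1 error
hinge_loss = np.mean(np.maximum(0.0, 1.0 - y * 2.0 * scores))  # scaled margin

bayes_error = 0.5 * (1.0 + erf(-np.linalg.norm(mu) / sqrt(2.0)))

print(f"linear 0-1 error: {empirical_error:.3f}")
print(f"hinge loss:       {hinge_loss:.3f}")
print(f"Bayes error:      {bayes_error:.3f}")
```

With 400 coordinates each carrying effect 0.1, the aggregate signal strength is ||mu|| = 2, so the linear rule's error rate lands close to the Bayes error Phi(-2) ≈ 0.023, and its hinge loss stays small, mirroring the qualitative claim of the paper.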