Linear classifiers are nearly optimal when hidden variables have diverse effects (Q420914)

Property / DOI: 10.1007/s10994-011-5262-7 / rank
Normal rank
Property / review text: The authors show that a linear classifier can provide a good approximation even when the optimal classifier is much more complex. To establish this, they analyze a classification problem in which the data are generated by a two-tiered random process: hidden variables are drawn first and then determine the distribution of the observed variables. Concretely, they prove that if the hidden variables have non-negligible effects on many observed variables, then a linear classifier accurately approximates the error rate of the Bayes-optimal classifier. Moreover, the hinge loss of the linear classifier is not much larger than the Bayes error rate. / rank
Normal rank
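To make the reviewed result concrete, below is a minimal, self-contained numerical sketch of a two-tiered generative process of the kind the review describes: a label drives a few hidden variables, each of which affects many observed variables. All specifics here (the probabilities P_HY and P_XH, the disjoint group structure, and the least-squares linear learner) are illustrative assumptions, not the paper's construction; the point is only to compare a plain linear rule against the exact Bayes-optimal rule for this toy model.

```python
# Illustrative two-tier model (assumptions, not the paper's construction):
#   label y in {-1,+1}  ->  K hidden variables h_k  ->  N observed variables x_j
import numpy as np

rng = np.random.default_rng(0)

K, PER = 4, 25            # hidden variables; observed variables per hidden variable
N = K * PER
P_HY, P_XH = 0.8, 0.7     # P(h_k = y) and P(x_j = its parent h_k)

def sample(m):
    """Draw m labeled examples from the two-tiered process."""
    y = rng.choice([-1, 1], size=m)
    h = np.where(rng.random((m, K)) < P_HY, y[:, None], -y[:, None])
    parents = np.repeat(h, PER, axis=1)               # column j's parent value
    x = np.where(rng.random((m, N)) < P_XH, parents, -parents)
    return x, y

def bayes_predict(x):
    """Exact Bayes-optimal rule: hidden variables are independent given y,
    and each affects its own disjoint group of PER observed variables."""
    log_odds = np.zeros(len(x))
    for k in range(K):
        agree = (x[:, k * PER:(k + 1) * PER] == 1).sum(axis=1)
        def lik(hk):                                   # P(group k | h_k = hk)
            p = P_XH if hk == 1 else 1 - P_XH
            return p ** agree * (1 - p) ** (PER - agree)
        lik_pos = P_HY * lik(1) + (1 - P_HY) * lik(-1)     # given y = +1
        lik_neg = (1 - P_HY) * lik(1) + P_HY * lik(-1)     # given y = -1
        log_odds += np.log(lik_pos) - np.log(lik_neg)
    return np.where(log_odds >= 0, 1, -1)

Xtr, ytr = sample(5_000)
Xte, yte = sample(20_000)

# A plain linear rule (least squares here purely for simplicity; the paper
# analyzes hinge-loss minimization).
w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
print("linear error:", np.mean(np.sign(Xte @ w) != yte))
print("Bayes  error:", np.mean(bayes_predict(Xte) != yte))
```

In a run of this sketch the two error rates should come out close to each other: the exact Bayes rule is nonlinear (a sum of saturating functions of per-group agreement counts), yet because every hidden variable affects many observed variables, the plain linear fit nearly matches it, which is the shape of the paper's result.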
Property / reviewed by: Florin Gorunescu / rank
Normal rank
Property / Mathematics Subject Classification ID: 68T05 / rank
Normal rank
Property / Mathematics Subject Classification ID: 62H30 / rank
Normal rank
Property / zbMATH DE Number: 6037842 / rank
Normal rank
Property / zbMATH Keywords: learning theory / rank
Normal rank
Property / zbMATH Keywords: Bayes optimal rule / rank
Normal rank
Property / zbMATH Keywords: linear classification / rank
Normal rank
Property / zbMATH Keywords: hidden variables / rank
Normal rank
Property / describes a project that uses: Pegasos / rank
Normal rank
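Since the item records Pegasos as software the work uses, a brief reminder of what that solver does may help the reader. The sketch below implements the core Pegasos stochastic sub-gradient step for the regularized hinge-loss objective, following the cited paper's description (step size 1/(λt), one uniformly sampled example per round, optional projection onto the ball of radius 1/√λ); the arguments X and y are placeholder training data.

```python
# Minimal sketch of the Pegasos update (Shalev-Shwartz, Singer, Srebro):
# stochastic sub-gradient descent on  lam/2 * ||w||^2 + mean hinge loss.
import numpy as np

def pegasos(X, y, lam=0.01, T=10_000, seed=0):
    """X: (m, d) features; y: (m,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, T + 1):
        i = rng.integers(len(y))       # one uniformly sampled example
        eta = 1.0 / (lam * t)          # the 1/(lambda * t) step size schedule
        if y[i] * (X[i] @ w) < 1:      # hinge loss active: margin below 1
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                          # only the regularizer contributes
            w = (1 - eta * lam) * w
        # optional projection onto the ball of radius 1/sqrt(lam)
        w *= min(1.0, 1.0 / (np.sqrt(lam) * (np.linalg.norm(w) + 1e-12)))
    return w
```

This connects to the item's subject matter because the paper's guarantee is stated in terms of the hinge loss, the quantity Pegasos minimizes.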
Property / describes a project that uses: BoosTexter / rank
Normal rank
Property / MaRDI profile type: MaRDI publication profile / rank
Normal rank
Property / full work available at URL: https://doi.org/10.1007/s10994-011-5262-7 / rank
Normal rank
Property / OpenAlex ID: W2066953979 / rank
Normal rank
Property / cites work: Probably almost Bayes decisions / rank
Normal rank
Property / cites work: Convexity, Classification, and Risk Bounds / rank
Normal rank
Property / cites work: Some theory for Fisher's linear discriminant function, `naive Bayes', and some alternatives when there are many more variables than observations / rank
Normal rank
Property / cites work: 10.1162/jmlr.2003.3.4-5.993 / rank
Normal rank
Property / cites work: Linear classifiers are nearly optimal when hidden variables have diverse effects / rank
Normal rank
Property / cites work: Evolutionary Trees Can be Learned in Polynomial Time in the Two-State General Markov Model / rank
Normal rank
Property / cites work: Q4881152 / rank
Normal rank
Property / cites work: On the optimality of the simple Bayesian classifier under zero-one loss / rank
Normal rank
Property / cites work: Balls and bins: A study in negative dependence / rank
Normal rank
Property / cites work: Q2707395 / rank
Normal rank
Property / cites work: Probability Inequalities for Sums of Bounded Random Variables / rank
Normal rank
Property / cites work: Unsupervised learning by probabilistic latent semantic analysis / rank
Normal rank
Property / cites work: Classification using hierarchical Naïve Bayes models / rank
Normal rank
Property / cites work: Q5639147 / rank
Normal rank
Property / cites work: Latent semantic indexing: A probabilistic analysis / rank
Normal rank
Property / cites work: Q4780802 / rank
Normal rank
Property / cites work: Convergence of stochastic processes / rank
Normal rank
Property / cites work: BoosTexter: A boosting-based system for text categorization / rank
Normal rank
Property / cites work: Q3140437 / rank
Normal rank
Property / cites work: Every linear threshold function has a low-weight approximator / rank
Normal rank
Property / cites work: Pegasos: primal estimated sub-gradient solver for SVM / rank
Normal rank
Property / cites work: Q3093200 / rank
Normal rank
Property / cites work: Statistical behavior and consistency of classification methods based on convex risk minimization / rank
Normal rank

scientific article

Language: English
Label: Linear classifiers are nearly optimal when hidden variables have diverse effects
Description: scientific article

    Statements

    Linear classifiers are nearly optimal when hidden variables have diverse effects (English)
    23 May 2012
    The authors show that a linear classifier can provide a good approximation even when the optimal classifier is much more complex. To establish this, they analyze a classification problem in which the data are generated by a two-tiered random process: hidden variables are drawn first and then determine the distribution of the observed variables. Concretely, they prove that if the hidden variables have non-negligible effects on many observed variables, then a linear classifier accurately approximates the error rate of the Bayes-optimal classifier. Moreover, the hinge loss of the linear classifier is not much larger than the Bayes error rate.
    learning theory
    Bayes optimal rule
    linear classification
    hidden variables

    Identifiers