Linear classifiers are nearly optimal when hidden variables have diverse effects
From MaRDI portal
Publication: 420914
DOI: 10.1007/s10994-011-5262-7
zbMath: 1267.68175
OpenAlex: W2066953979
MaRDI QID: Q420914
Nader H. Bshouty, Philip M. Long
Publication date: 23 May 2012
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-011-5262-7
Classification and discrimination; cluster analysis (statistical aspects) (62H30)
Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Linear classifiers are nearly optimal when hidden variables have diverse effects
- Pegasos: primal estimated sub-gradient solver for SVM
- On the optimality of the simple Bayesian classifier under zero-one loss
- BoosTexter: A boosting-based system for text categorization
- Latent semantic indexing: A probabilistic analysis
- Some theory for Fisher's linear discriminant function, "naive Bayes", and some alternatives when there are many more variables than observations
- Probably almost Bayes decisions
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Every linear threshold function has a low-weight approximator
- Classification using hierarchical Naïve Bayes models
- Evolutionary Trees Can be Learned in Polynomial Time in the Two-State General Markov Model
- Latent Dirichlet allocation (DOI: 10.1162/jmlr.2003.3.4-5.993)
- Balls and bins: A study in negative dependence
- Probability Inequalities for Sums of Bounded Random Variables
- Convexity, Classification, and Risk Bounds
- Convergence of stochastic processes
- Unsupervised learning by probabilistic latent semantic analysis