Knowing what doesn't matter: exploiting the omission of irrelevant data
DOI: 10.1016/S0004-3702(97)00048-9
zbMath: 0904.68141
MaRDI QID: Q1127363
Authors: Russell Greiner, Adam J. Grove, Alexander Kogan
Publication date: 13 August 1998
Published in: Artificial Intelligence
Keywords: decision trees; diagnosis; theory revision; DNF; learnability; adversarial noise; blocked attributes; irrelevant values
MSC classification:
- 68T05: Learning and adaptive systems in artificial intelligence
- 68W10: Parallel algorithms in computer science
Related Items
- Partial observability and learnability
- Selection of relevant features and examples in machine learning
- Knowing what doesn't matter: exploiting the omission of irrelevant data
- The complexity of theory revision
- Logical analysis of binary data with missing bits
- Information-theoretic approaches to SVM feature selection for metagenome read classification
- Learning cost-sensitive active classifiers
Cites Work
- Teaching a smarter learner
- Knowing what doesn't matter: exploiting the omission of irrelevant data
- Equivalence of models for polynomial learnability
- Learning Boolean functions in an infinite attribute space
- The complexity of theory revision
- Learning decision trees from random examples
- Learning in the presence of finitely or infinitely many irrelevant attributes
- Can PAC learning algorithms tolerate random attribute noise?
- On the learnability of discrete distributions
- Learning in the Presence of Malicious Errors
- A theory of the learnable
- A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations