Self-improved gaps almost everywhere for the agnostic approximation of monomials
Publication: Q884469
DOI: 10.1016/j.tcs.2007.02.023
zbMath: 1117.68040
OpenAlex: W2135781605
Publication date: 6 June 2007
Published in: Theoretical Computer Science
Full work available at URL: https://doi.org/10.1016/j.tcs.2007.02.023
Cites Work
- On the limits of proper learnability of subclasses of DNF formulas
- Maximizing agreements and coagnostic learning
- Toward efficient agnostic learning
- A decision-theoretic generalization of on-line learning and an application to boosting
- Complexity in the case against accuracy estimation
- Clique is hard to approximate within \(n^{1-\epsilon}\)
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Robust trainability of single neurons
- Lower bounds on learning decision lists and trees
- Improved boosting algorithms using confidence-rated predictions
- Very simple classification rules perform well on most commonly used datasets
- Linear degree extractors and the inapproximability of max clique and chromatic number