On the perceptron's compression
From MaRDI portal
Publication:2106618
Cites work
- scientific article, zbMATH DE number 5957338 (title unavailable)
- scientific article, zbMATH DE number 774005 (title unavailable)
- scientific article, zbMATH DE number 3189712 (title unavailable)
- DOI: 10.1162/15324430260185600 (title unavailable)
- DOI: 10.1162/153244303321897681 (title unavailable)
- Algorithmic Learning Theory
- An algorithmic theory of learning: Robust concepts and random projection
- Boosting a weak learning algorithm by majority
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Extensions of Lipschitz mappings into a Hilbert space
- Large margin classification using the perceptron algorithm
- Learning Theory
- Learning the unlearnable
- Noise tolerant variants of the perceptron algorithm
- On the Generalization Ability of On-Line Learning Algorithms
- On variants of the Johnson–Lindenstrauss lemma
- PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification
- Pegasos: primal estimated sub-gradient solver for SVM
- Sample Compression Schemes for VC Classes
- The implicit bias of gradient descent on separable data
- Understanding machine learning. From theory to algorithms