On the perceptron's compression
From MaRDI portal
Publication: Q2106618
DOI: 10.1007/978-3-030-51466-2_29
OpenAlex: W3037931094
MaRDI QID: Q2106618
FDO: Q2106618
Authors: Shay Moran, Ido Nachum, Itai Panasoff, Amir Yehudayoff
Publication date: 16 December 2022
Full work available at URL: https://arxiv.org/abs/1806.05403
Cites Work
- Pegasos: primal estimated sub-gradient solver for SVM
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Extensions of Lipschitz mappings into a Hilbert space
- Understanding machine learning. From theory to algorithms
- Large margin classification using the perceptron algorithm
- Boosting a weak learning algorithm by majority
- DOI: 10.1162/15324430260185600
- On variants of the Johnson–Lindenstrauss lemma
- On the Generalization Ability of On-Line Learning Algorithms
- Noise tolerant variants of the perceptron algorithm
- DOI: 10.1162/153244303321897681
- An algorithmic theory of learning: Robust concepts and random projection
- Learning Theory
- Sample Compression Schemes for VC Classes
- Algorithmic Learning Theory
- Learning the unlearnable
- PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification
- The implicit bias of gradient descent on separable data
This page was built for publication: On the perceptron's compression