FastText.zip: Compressing text classification models
Publication: 94187
DOI: 10.48550/ARXIV.1612.03651
arXiv: 1612.03651
MaRDI QID: Q94187
FDO: Q94187
Authors: Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, Tomas Mikolov
Publication date: 12 December 2016
Abstract: We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent quantization artefacts. Our experiments carried out on several benchmarks show that our approach typically requires two orders of magnitude less memory than fastText while being only slightly inferior with respect to accuracy. As a result, it outperforms the state of the art by a good margin in terms of the compromise between memory usage and accuracy.
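The record itself carries no code, but as a rough illustration of the product-quantization idea the abstract refers to, the following NumPy sketch splits an embedding matrix into subvectors, learns a small codebook per subspace with plain k-means, and stores each word vector as one byte per subspace. All names and parameters here (kmeans, pq_train, m=4, k=256) are illustrative assumptions, not the authors' implementation; in particular, the paper further adapts PQ to circumvent the quantization artefacts mentioned in the abstract, which this minimal sketch does not attempt.

import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain Lloyd's k-means: illustrative stand-in for the codebook learner.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Squared Euclidean distances via ||x||^2 - 2 x.c + ||c||^2.
        d = ((X ** 2).sum(1, keepdims=True)
             - 2.0 * X @ centroids.T
             + (centroids ** 2).sum(1))
        assign = d.argmin(1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):  # keep the old centroid if a cluster empties
                centroids[j] = pts.mean(0)
    # Final assignment against the updated centroids.
    d = ((X ** 2).sum(1, keepdims=True)
         - 2.0 * X @ centroids.T
         + (centroids ** 2).sum(1))
    return centroids, d.argmin(1)

def pq_train(emb, m=4, k=256):
    # Split the d dimensions into m subspaces; learn one codebook each.
    n, d = emb.shape
    assert d % m == 0
    sub = d // m
    codebooks = np.empty((m, k, sub), dtype=emb.dtype)
    codes = np.empty((n, m), dtype=np.uint8)  # k <= 256 -> 1 byte per code
    for i in range(m):
        block = emb[:, i * sub:(i + 1) * sub]
        codebooks[i], codes[:, i] = kmeans(block, k)
    return codebooks, codes

def pq_decode(codebooks, codes):
    # Reconstruct approximate embeddings from the stored byte codes.
    m = codebooks.shape[0]
    return np.concatenate([codebooks[i][codes[:, i]] for i in range(m)], axis=1)

# Toy usage: 5000 float32 vectors of dimension 64 take 5000*64*4 bytes;
# the PQ codes take 5000*m bytes plus m*k*(64/m)*4 bytes of codebooks.
emb = np.random.default_rng(0).normal(size=(5000, 64)).astype(np.float32)
codebooks, codes = pq_train(emb, m=4, k=256)
approx = pq_decode(codebooks, codes)
print("mean squared reconstruction error:", ((emb - approx) ** 2).mean())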
Cited in: 1 publication