Understanding Sparse JL for Feature Hashing

From MaRDI portal

Publication: 6315296

arXiv: 1903.03605 · MaRDI QID: Q6315296 · FDO: Q6315296


Author: Meena Jagadeesan


Publication date: 8 March 2019

Abstract: Feature hashing and other random projection schemes are commonly used to reduce the dimensionality of feature vectors. The goal is to efficiently project a high-dimensional feature vector living in $\mathbb{R}^n$ into a much lower-dimensional space $\mathbb{R}^m$, while approximately preserving the Euclidean norm. These schemes can be constructed using sparse random projections, for example using a sparse Johnson-Lindenstrauss (JL) transform. A line of work introduced by Weinberger et al. (ICML '09) analyzes the accuracy of sparse JL with sparsity 1 on feature vectors with a small $\ell_\infty$-to-$\ell_2$ norm ratio. Recently, Freksen, Kamma, and Larsen (NeurIPS '18) closed this line of work by proving a tight tradeoff between the $\ell_\infty$-to-$\ell_2$ norm ratio and accuracy for sparse JL with sparsity 1. In this paper, we demonstrate the benefits of using sparsity $s$ greater than 1 in sparse JL on feature vectors. Our main result is a tight tradeoff between the $\ell_\infty$-to-$\ell_2$ norm ratio and accuracy for general sparsity $s$, significantly generalizing the result of Freksen et al. Our result theoretically demonstrates that sparse JL with $s > 1$ can have significantly better norm-preservation properties on feature vectors than sparse JL with $s = 1$; we also demonstrate this finding empirically.
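For illustration, below is a minimal Python sketch of a sparse JL transform, using one standard construction in which each of the $n$ columns receives exactly $s$ nonzero entries of value $\pm 1/\sqrt{s}$ placed in $s$ distinct, uniformly random rows (with $s = 1$ this recovers classic feature hashing). This is an assumption-laden sketch, not the authors' implementation; the function name sparse_jl_matrix and all parameter values here are chosen purely for illustration. See the companion repository linked below for the code accompanying the paper.

```python
import numpy as np

def sparse_jl_matrix(n, m, s, rng):
    """Sketch of a sparse JL matrix: each column has exactly s nonzeros,
    each equal to +/- 1/sqrt(s), placed in s distinct rows chosen
    uniformly at random. Sparsity s = 1 recovers feature hashing."""
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s distinct target rows
        signs = rng.choice([-1.0, 1.0], size=s)       # random signs
        A[rows, j] = signs / np.sqrt(s)
    return A

# Project a vector with a small l_inf-to-l_2 norm ratio and check
# how well the Euclidean norm is preserved (ratio should be near 1).
rng = np.random.default_rng(0)
n, m, s = 10_000, 256, 4
x = rng.normal(size=n)                 # dense vector => small norm ratio
A = sparse_jl_matrix(n, m, s, rng)
print(np.linalg.norm(A @ x) / np.linalg.norm(x))
```

Because a Gaussian vector has a small $\ell_\infty$-to-$\ell_2$ norm ratio, the printed ratio should concentrate near 1; spikier vectors (large $\ell_\infty$-to-$\ell_2$ ratio) degrade accuracy, which is exactly the tradeoff the paper quantifies as a function of $s$.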

Has companion code repository: https://github.com/mjagadeesan/sparsejl-featurehashing

This page was built for publication: Understanding Sparse JL for Feature Hashing
