Bucketing Coding and Information Theory for the Statistical High-Dimensional Nearest-Neighbor Problem
Publication:5281372
DOI: 10.1109/TIT.2010.2050814 · zbMATH Open: 1368.68209 · arXiv: 0810.4182 · MaRDI QID: Q5281372 · FDO: Q5281372
Author: Moshe Dubiner
Publication date: 27 July 2017
Published in: IEEE Transactions on Information Theory
Abstract: Consider the problem of finding high-dimensional approximate nearest neighbors, where the data is generated by some known probabilistic model. We will investigate a large natural class of algorithms which we call bucketing codes. We will define bucketing information, prove that it bounds the performance of all bucketing codes, and that the bucketing information bound can be asymptotically attained by randomly constructed bucketing codes. For example, suppose we have \(n\) Bernoulli(1/2) very long (length \(d \to \infty\)) sequences of bits. Let \(n-2m\) sequences be completely independent, while the remaining \(2m\) sequences are composed of \(m\) independent pairs. The interdependence within each pair is that their bits agree with probability \(1/2 < p \le 1\). It is well known how to find most pairs with high probability by performing on the order of \(n^{\log_2(2/p)}\) comparisons. We will see that on the order of \(n^{1/p+\epsilon}\) comparisons suffice, for any \(\epsilon > 0\). Moreover, if one sequence out of each pair belongs to a known set of \(n^{(2p-1)^2-\epsilon}\) sequences, then pairing can be done using order \(n\) comparisons!
Full work available at URL: https://arxiv.org/abs/0810.4182
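The abstract contrasts the classical bucketing baseline, with roughly \(n^{\log_2(2/p)}\) comparisons, against the paper's bound of roughly \(n^{1/p+\epsilon}\); for instance, at \(p = 3/4\) the classical exponent is \(\log_2(8/3) \approx 1.415\) while \(1/p \approx 1.333\). The sketch below illustrates only that classical baseline (hash each sequence by a random subset of coordinates and compare only colliding sequences, over several independent tables), not the paper's bucketing-code construction; all parameter values (d, k, number of tables) are illustrative assumptions.

```python
# Minimal sketch of the classical bucketing / LSH-style baseline mentioned in
# the abstract, NOT the paper's bucketing-code construction. Parameters are
# illustrative assumptions chosen so the planted pairs are usually recovered.
import random
from collections import defaultdict


def generate_data(n_pairs, n_singles, d, p, rng):
    """Bernoulli(1/2) bit sequences: n_singles independent ones plus
    n_pairs correlated pairs whose bits agree with probability p."""
    seqs, pairs = [], []
    for _ in range(n_singles):
        seqs.append([rng.randint(0, 1) for _ in range(d)])
    for _ in range(n_pairs):
        a = [rng.randint(0, 1) for _ in range(d)]
        b = [x if rng.random() < p else 1 - x for x in a]
        pairs.append((len(seqs), len(seqs) + 1))
        seqs.extend([a, b])
    return seqs, pairs


def bucketing_candidates(seqs, d, k, n_tables, rng):
    """Candidate pairs that collide in at least one bucket, where a bucket
    is the restriction of a sequence to a random subset of k coordinates."""
    candidates = set()
    for _ in range(n_tables):
        coords = rng.sample(range(d), k)       # random coordinate subset
        buckets = defaultdict(list)
        for i, s in enumerate(seqs):
            key = tuple(s[c] for c in coords)  # bucket label
            buckets[key].append(i)
        for members in buckets.values():
            for i in range(len(members)):
                for j in range(i + 1, len(members)):
                    candidates.add((members[i], members[j]))
    return candidates


if __name__ == "__main__":
    rng = random.Random(0)
    d, p = 400, 0.8
    seqs, true_pairs = generate_data(n_pairs=20, n_singles=200, d=d, p=p, rng=rng)
    cands = bucketing_candidates(seqs, d=d, k=12, n_tables=30, rng=rng)
    found = sum(1 for pr in true_pairs if pr in cands or (pr[1], pr[0]) in cands)
    print(f"recovered {found}/{len(true_pairs)} planted pairs "
          f"using {len(cands)} candidate comparisons")
```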
Recommendations
- Demystifying Fixed \(k\)-Nearest Neighbor Information Estimators
- On computing nearest neighbors with applications to decoding of binary linear codes
- Efficient multivariate entropy estimation via \(k\)-nearest neighbour distances
- The information bottleneck and geometric clustering
- Geometric k-nearest neighbor estimation of entropy and mutual information
- Near-Optimal Sample Compression for Nearest Neighbors
- Dimensionality-dependent generalization bounds for \(k\)-dimensional coding schemes
Cited In (10)
- Identifying an unknown code by partial Gaussian elimination
- Title not available
- Generalization of BJMM-ISD using May-Ozerov nearest neighbor algorithm over an arbitrary finite field \(\mathbb{F}_q\)
- Detecting the large entries of a sparse covariance matrix in sub-quadratic time
- ForestDSH: a universal hash design for discrete probability distributions
- Finding correlations in subquadratic time, with applications to learning parities and the closest pair problem
- Asymptotics and improvements of sieving for codes
- Title not available
- A new coding-based algorithm for finding closest pair of vectors
- Explicit correlation amplifiers for finding outlier correlations in deterministic subquadratic time