Universal consistency of the k-NN rule in metric spaces and Nagata dimension

From MaRDI portal
Publication:5140348

DOI: 10.1051/PS/2020018
zbMATH Open: 1455.62123
arXiv: 2003.00894
OpenAlex: W3006969417
MaRDI QID: Q5140348
FDO: Q5140348


Authors: Benoit Collins, Sushma Kumari, Vladimir G. Pestov


Publication date: 15 December 2020

Published in: ESAIM: Probability and Statistics

Abstract: The k-nearest neighbour learning rule (under uniform distance tie-breaking) is universally consistent in every metric space X that is sigma-finite dimensional in the sense of Nagata. This was pointed out by Cérou and Guyader (2006) as a consequence of their main result, combined with a theorem in real analysis sketched by D. Preiss (1971) (and elaborated in detail by Assouad and Quentin de Gromard (2006)). We show that it is possible to give a direct proof along the same lines as the original theorem of Charles J. Stone (1977) on the universal consistency of the k-NN classifier in finite-dimensional Euclidean space. The generalization is non-trivial because distance ties are more prevalent in the non-Euclidean setting, and along the way we investigate the relevant geometric properties of the metrics and the limitations of Stone's argument by constructing various examples.
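The abstract concerns the k-NN rule under uniform distance tie-breaking: a query point is labelled by majority vote among its k nearest sample points, with points at exactly equal distance ordered uniformly at random. A minimal illustrative sketch of that rule (not code from the paper; the function and parameter names here are hypothetical, and an arbitrary user-supplied metric stands in for a general metric space):

```python
import random
from collections import Counter

def knn_classify(query, points, labels, k, dist, rng=random):
    """Majority vote among the k nearest neighbours of `query` under the
    metric `dist`, with distance ties broken uniformly at random.

    Illustrative sketch only; `points`, `labels`, and `dist` are
    assumptions of this example, not objects defined in the paper.
    """
    idx = list(range(len(points)))
    # Shuffle first, then stable-sort by distance: indices at equal
    # distance keep their random relative order, i.e. uniform tie-breaking.
    rng.shuffle(idx)
    idx.sort(key=lambda i: dist(query, points[i]))
    nearest = idx[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Usage with the one-dimensional Euclidean metric:
points = [0.0, 1.0, 2.0, 10.0, 11.0]
labels = [0, 0, 0, 1, 1]
print(knn_classify(0.5, points, labels, 3, dist=lambda a, b: abs(a - b)))  # → 0
```

In a general metric space many sample points may lie at exactly the same distance from the query, which is why the tie-breaking scheme matters; the paper shows this is the main obstacle in extending Stone's Euclidean argument.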


Full work available at URL: https://arxiv.org/abs/2003.00894










Cited In (5)





