Universal Bayes consistency in metric spaces

From MaRDI portal
Publication:2054482

DOI: 10.1214/20-AOS2029
zbMATH Open: 1486.62324
arXiv: 1906.09855
OpenAlex: W3204278298
MaRDI QID: Q2054482


Authors: Steve Hanneke, Sivan Sabato, Roi Weiss, Aryeh Kontorovich


Publication date: 3 December 2021

Published in: The Annals of Statistics

Abstract: We extend a recently proposed 1-nearest-neighbor based multiclass learning algorithm and prove that our modification is universally strongly Bayes-consistent in all metric spaces admitting any such learner, making it an "optimistically universal" Bayes-consistent learner. This is the first learning algorithm known to enjoy this property; by comparison, the k-NN classifier and its variants are not generally universally Bayes-consistent, except under additional structural assumptions, such as an inner product, a norm, finite dimension, or a Besicovitch-type property. The metric spaces in which universal Bayes consistency is possible are the "essentially separable" ones -- a notion that we define, which is more general than standard separability. The existence of metric spaces that are not essentially separable is widely believed to be independent of the ZFC axioms of set theory. We prove that essential separability exactly characterizes the existence of a universal Bayes-consistent learner for the given metric space. In particular, this yields the first impossibility result for universal Bayes consistency. Taken together, our results completely characterize strong and weak universal Bayes consistency in metric spaces.
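To make the abstract's starting point concrete, here is a minimal sketch of the plain 1-nearest-neighbor rule over an arbitrary metric space, the classifier family the paper's modification builds on. All names below are hypothetical illustrations; this is not the authors' modified algorithm, only the basic 1-NN prediction rule.

```python
# Illustrative 1-nearest-neighbor classifier over an arbitrary metric.
# Hypothetical sketch -- NOT the paper's universally Bayes-consistent
# modification, just the underlying 1-NN rule it extends.

def nearest_neighbor_predict(train, metric, x):
    """Return the label of the training point closest to x.

    train  -- list of (point, label) pairs
    metric -- function (a, b) -> nonnegative distance
    x      -- query point
    """
    _, label = min(train, key=lambda pl: metric(pl[0], x))
    return label

# Example in the metric space (R, |.|):
train = [(0.0, "a"), (1.0, "b"), (5.0, "a")]
dist = lambda a, b: abs(a - b)
print(nearest_neighbor_predict(train, dist, 0.9))  # nearest point is 1.0, so "b"
```

Note that the rule only needs pairwise distances, never coordinates, which is why the question of Bayes consistency can be posed in a general metric space rather than only in normed or finite-dimensional settings.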


Full work available at URL: https://arxiv.org/abs/1906.09855



Cited In (12)
