Quantization via Empirical Divergence Maximization
Publication: 4574192
DOI: 10.1109/TSP.2012.2217136 · zbMATH Open: 1393.94320 · arXiv: 1111.1738 · MaRDI QID: Q4574192 · FDO: Q4574192
Authors: Michael A. Lexa
Publication date: 18 July 2018
Published in: IEEE Transactions on Signal Processing
Abstract: Empirical divergence maximization (EDM) refers to a recently proposed strategy for estimating f-divergences and likelihood ratio functions. This paper extends the idea to empirical vector quantization, where one seeks to empirically derive quantization rules that maximize the Kullback-Leibler divergence between two statistical hypotheses. We analyze the estimator's error convergence rate by leveraging Tsybakov's margin condition and show that rates as fast as 1/n are possible, where n is the number of training samples. We also show that the Flynn and Gray algorithm can be used to efficiently compute EDM estimates, and that these estimates can be efficiently and accurately represented by recursive dyadic partitions. The EDM formulation has several advantages. First, it gives access to the tools and results of empirical process theory that quantify the estimator's error convergence rate. Second, it provides a previously unknown derivation of the Flynn and Gray algorithm. Third, its flexibility allows one to avoid a small-cell assumption common in other approaches. Finally, we illustrate the potential use of the method through an example.
Full work available at URL: https://arxiv.org/abs/1111.1738
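As a rough illustration of the idea in the abstract, the following sketch quantizes samples drawn under two hypotheses with a shared partition and picks, among uniform dyadic partitions of increasing depth, the one whose plug-in empirical Kullback-Leibler divergence is largest. This is a hedged toy example, not the authors' EDM estimator or the Flynn and Gray algorithm: the partition family, the depth search, and the smoothing constant `eps` are all illustrative choices.

```python
import numpy as np

def empirical_kl_of_partition(x_p, x_q, edges, eps=1e-12):
    """Plug-in estimate of KL(P||Q) after quantizing both samples with
    the same cell boundaries `edges` (a toy stand-in for a quantization
    rule; not the paper's estimator)."""
    p, _ = np.histogram(x_p, bins=edges)
    q, _ = np.histogram(x_q, bins=edges)
    p = (p + eps) / (p + eps).sum()   # smoothed empirical cell probabilities under P
    q = (q + eps) / (q + eps).sum()   # ... and under Q
    return float(np.sum(p * np.log(p / q)))

def best_dyadic_partition(x_p, x_q, lo, hi, max_depth=6):
    """Search over uniform dyadic partitions of [lo, hi] (depth d gives
    2**d cells) for the one with the largest quantized empirical KL
    divergence -- a crude sketch of divergence-maximizing quantizer
    selection over recursive dyadic partitions."""
    best_kl, best_edges = -np.inf, None
    for d in range(1, max_depth + 1):
        edges = np.linspace(lo, hi, 2 ** d + 1)
        kl = empirical_kl_of_partition(x_p, x_q, edges)
        if kl > best_kl:
            best_kl, best_edges = kl, edges
    return best_kl, best_edges

rng = np.random.default_rng(0)
x_p = rng.normal(0.0, 1.0, 2000)   # training samples under hypothesis P
x_q = rng.normal(1.0, 1.0, 2000)   # training samples under hypothesis Q
kl, edges = best_dyadic_partition(x_p, x_q, -5.0, 5.0)
print(f"selected cells: {len(edges) - 1}, empirical quantized KL: {kl:.3f}")
```

For these two unit-variance Gaussians the true KL divergence is 0.5, so the selected quantizer's empirical divergence should land near that value; by the data-processing inequality, quantization can only reduce the population divergence, though empirical estimates fluctuate.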
MSC classifications:
Statistical aspects of information-theoretic topics (62B10)
Classification and discrimination; cluster analysis (statistical aspects) (62H30)
Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Cited In (2)