Ensemble Estimators for Multivariate Entropy Estimation
Publication:5346463
DOI: 10.1109/TIT.2013.2251456 · zbMATH Open: 1364.94254 · arXiv: 1203.5829 · OpenAlex: W2005560667 · Wikidata: Q39755627 (Scholia: Q39755627) · MaRDI QID: Q5346463 (FDO: Q5346463)
Authors: Kumar Sricharan, Dennis Wei, Alfred O. Hero III
Publication date: 8 June 2017
Published in: IEEE Transactions on Information Theory
Abstract: The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size \(T\) as the dimension \(d\) of the samples increases. In particular, the rate is often glacially slow, of order \(O(T^{-\gamma/d})\), where \(\gamma > 0\) is a rate parameter. Examples of such estimators include kernel density estimators, \(k\)-nearest neighbor (\(k\)-NN) density estimators, \(k\)-NN entropy estimators, intrinsic dimension estimators, and others. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at the much faster, dimension-invariant rate \(O(T^{-1})\). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample.
Full work available at URL: https://arxiv.org/abs/1203.5829
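The ensemble idea in the abstract can be illustrated with a minimal sketch: compute several base \(k\)-NN entropy estimates (here the standard Kozachenko-Leonenko form, a common choice of base estimator) and combine them with minimum-norm weights that sum to one while annihilating low-order bias terms. The bias basis \(k^{i/d}\) and the closed-form minimum-norm solve below are simplifying assumptions standing in for the paper's full convex program, not its exact formulation.

```python
import numpy as np
from scipy.special import digamma, gammaln
from scipy.spatial import cKDTree

def knn_entropy(X, k):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats)."""
    n, d = X.shape
    tree = cKDTree(X)
    # distance to the k-th nearest neighbour, excluding the point itself
    r = tree.query(X, k=k + 1)[0][:, k]
    log_cd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log volume of unit d-ball
    return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(r))

def ensemble_weights(ks, d):
    """Minimum-norm weights w with sum(w) = 1 and sum(w * k**(i/d)) = 0
    for i = 1..d-1 (a simplified stand-in for the paper's convex program)."""
    ks = np.asarray(ks, float)
    C = np.vstack([np.ones_like(ks)] + [ks ** (i / d) for i in range(1, d)])
    b = np.zeros(C.shape[0])
    b[0] = 1.0
    # min ||w||_2 subject to C w = b  =>  w = C^T (C C^T)^{-1} b
    return C.T @ np.linalg.solve(C @ C.T, b)

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 2))           # 2-D standard Gaussian sample
ks = [4, 8, 16, 32]                          # ensemble of base estimators
w = ensemble_weights(ks, d=2)
H_hat = sum(wi * knn_entropy(X, k) for wi, k in zip(w, ks))
true_H = np.log(2 * np.pi * np.e)            # exact entropy of N(0, I_2) in nats
```

The weights are data-independent, so they can be computed offline once for a given dimension and ensemble size, exactly as the abstract notes; the resulting weighted estimate is then a single affine combination of the base estimates.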
Cited In (5)
- Entropy estimation via uniformization
- Efficient multivariate entropy estimation via \(k\)-nearest neighbour distances
- Statistical estimation of the Shannon entropy
- Divergence measures estimation and its asymptotic normality theory in the discrete case
- Statistical estimation of conditional Shannon entropy