Minimax Estimation of Functionals of Discrete Distributions

From MaRDI portal
Publication:2978652

DOI: 10.1109/TIT.2015.2412945
zbMATH Open: 1359.62104
DBLP: journals/tit/JiaoVHW15
arXiv: 1406.6956
OpenAlex: W1989151402
Wikidata: Q47555295 (Scholia: Q47555295)
MaRDI QID: Q2978652
FDO: Q2978652


Authors: Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman


Publication date: 28 April 2017

Published in: IEEE Transactions on Information Theory

Abstract: We propose a general methodology for the construction and analysis of minimax estimators for a wide class of functionals of finite-dimensional parameters, and elaborate on the case of discrete distributions, where the alphabet size S is unknown and may be comparable with the number of observations n. We treat the respective regions where the functional is "nonsmooth" and "smooth" separately. In the "nonsmooth" regime, we apply an unbiased estimator for the best polynomial approximation of the functional, whereas in the "smooth" regime, we apply a bias-corrected Maximum Likelihood Estimator (MLE). We illustrate the merit of this approach by thoroughly analyzing two important cases: the entropy H(P) = -∑_{i=1}^S p_i ln p_i and F_α(P) = ∑_{i=1}^S p_i^α, α > 0. We obtain the minimax L_2 rates for estimating these functionals. In particular, we demonstrate that our estimator achieves the optimal sample complexity n ≍ S/ln S for entropy estimation. We also show that the sample complexity for estimating F_α(P), 0 < α < 1, is n ≍ S^{1/α}/ln S, which can be achieved by our estimator but not the MLE. For 1 < α < 3/2, we show that the minimax L_2 rate for estimating F_α(P) is (n ln n)^{-2(α-1)} regardless of the alphabet size, while the L_2 rate for the MLE is n^{-2(α-1)}. For all the above cases, the behavior of the minimax rate-optimal estimators with n samples is essentially that of the MLE with n ln n samples. We highlight the practical advantages of our schemes for entropy and mutual information estimation. We demonstrate that our approach reduces running time and boosts accuracy compared to various existing approaches. Moreover, we show that the mutual information estimator induced by our methodology leads to significant performance boosts over the Chow--Liu algorithm in learning graphical models.
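The abstract contrasts the plain plug-in (MLE) entropy estimator with bias-corrected alternatives. As a point of reference, the sketch below implements the plug-in estimator together with the classical Miller-Madow bias correction, which adds (Ŝ - 1)/(2n) for Ŝ observed symbols; this is only an illustration of the bias-correction idea, not the paper's estimator, which additionally uses best polynomial approximation on the "nonsmooth" low-probability symbols.

```python
import math
from collections import Counter

def entropy_mle(samples):
    """Plug-in (MLE) entropy estimate in nats:
    H(P_hat) = -sum_i (c_i/n) * ln(c_i/n) over observed symbols."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def entropy_miller_madow(samples):
    """MLE plus the classical Miller-Madow first-order bias correction
    (S_hat - 1) / (2n), where S_hat is the number of observed symbols.
    Illustrative only; the paper's estimator is a different, rate-optimal
    construction."""
    n = len(samples)
    s_hat = len(set(samples))
    return entropy_mle(samples) + (s_hat - 1) / (2 * n)
```

For a uniform two-symbol sample the plug-in estimate equals ln 2, and the correction shifts it up by 1/(2n), which partially offsets the MLE's systematic underestimation of entropy on small samples.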


Full work available at URL: https://arxiv.org/abs/1406.6956







Cited In (24)





