Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization

From MaRDI portal
Publication:5281236

DOI: 10.1109/TIT.2010.2068870
zbMATH Open: 1366.62071
arXiv: 0809.0853
MaRDI QID: Q5281236


Authors: Xuanlong Nguyen, Martin J. Wainwright, Michael Jordan


Publication date: 27 July 2017

Published in: IEEE Transactions on Information Theory

Abstract: We develop and analyze M-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. Our method is based on a non-asymptotic variational characterization of f-divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations.
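The abstract's variational characterization states that any f-divergence can be written as a supremum over functions g, e.g. for the KL divergence (f(t) = t log t, with convex conjugate f*(v) = exp(v - 1)): KL(P‖Q) = sup_g E_P[g(X)] − E_Q[exp(g(Y) − 1)], so an estimate is obtained by maximizing the empirical version of this objective over a function class. The sketch below is only an illustration of that idea, not the paper's method: it uses a hypothetical quadratic parametrization of g and plain gradient ascent (the paper works with RKHS function classes and penalized convex programs), applied to two Gaussians whose true KL divergence is (1 − 0)²/2 = 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(1.0, 1.0, n)  # samples from P = N(1, 1)
y = rng.normal(0.0, 1.0, n)  # samples from Q = N(0, 1)

# Variational lower bound for the KL divergence:
#   KL(P||Q) = sup_g  E_P[g(X)] - E_Q[exp(g(Y) - 1)]
# Illustrative parametrization g(z) = a + b*z + c*z^2; this family
# contains the optimizer g* = 1 + log(p/q), which is quadratic for
# Gaussians, so the bound can be (nearly) attained here.
theta = np.zeros(3)

def feats(z):
    # Feature map for the quadratic function class.
    return np.stack([np.ones_like(z), z, z * z])

fx, fy = feats(x), feats(y)
for _ in range(2000):
    w = np.exp(theta @ fy - 1.0)           # exp(g(Y) - 1)
    grad = fx.mean(axis=1) - (fy * w).mean(axis=1)
    theta += 0.05 * grad                   # ascent on a concave objective

est = (theta @ fx).mean() - np.exp(theta @ fy - 1.0).mean()
print(f"estimated KL: {est:.3f}  (true value 0.5)")
```

Because the objective is concave in g and g is linear in theta, this is a concave maximization, mirroring the convexity that makes the paper's empirical risk formulation tractable; the empirical estimate lands near the true value 0.5 up to sampling error.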


Full work available at URL: https://arxiv.org/abs/0809.0853







Cited In (56)





