On Bayes risk lower bounds

From MaRDI portal

Publication: Q2953644

zbMATH Open: 1429.62046 · arXiv: 1410.0503 · MaRDI QID: Q2953644


Authors: Xi Chen, Adityanand Guntuboyina, Yuchen Zhang


Publication date: 5 January 2017

Abstract: This paper provides a general technique for lower bounding the Bayes risk of statistical estimation, applicable to arbitrary loss functions and arbitrary prior distributions. A lower bound on the Bayes risk not only serves as a lower bound on the minimax risk, but also characterizes the fundamental limit of any estimator given the prior knowledge. Our bounds are based on the notion of f-informativity, which is a function of the underlying class of probability measures and the prior. Applying our bounds requires upper bounds on the f-informativity; we therefore derive new upper bounds on it that often yield tight Bayes risk lower bounds. Our technique leads to generalizations of a variety of classical minimax bounds (e.g., generalized Fano's inequality). Our Bayes risk lower bounds can be directly applied to several concrete estimation problems, including Gaussian location models, generalized linear models, and principal component analysis for spiked covariance models. To further demonstrate the applications of our Bayes risk lower bounds to machine learning problems, we present two new theoretical results: (1) a precise characterization of the minimax risk of learning spherical Gaussian mixture models under the smoothed analysis framework, and (2) lower bounds on the Bayes risk under a natural prior for both the prediction and estimation errors in high-dimensional sparse linear regression under an improper learning setting.
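To give a concrete feel for the kind of bound the abstract describes, the following is a minimal sketch of the *classical* Fano inequality for the Bayes risk under 0-1 loss, which the paper generalizes via f-informativity (mutual information is the special case of f-informativity for the KL divergence). The function names and the toy discrete model are illustrative choices, not from the paper.

```python
import math

def mutual_information(prior, likelihoods):
    """I(theta; X) for a discrete prior over N parameters and
    discrete likelihoods P(x | theta) (each row sums to 1)."""
    n_x = len(likelihoods[0])
    # Marginal distribution of X under the prior.
    marginal = [sum(p * lik[x] for p, lik in zip(prior, likelihoods))
                for x in range(n_x)]
    mi = 0.0
    for p, lik in zip(prior, likelihoods):
        for x in range(n_x):
            if lik[x] > 0:
                mi += p * lik[x] * math.log(lik[x] / marginal[x])
    return mi

def fano_bayes_lower_bound(prior, likelihoods):
    """Classical Fano bound on the Bayes risk under 0-1 loss over
    N hypotheses:  risk >= 1 - (I(theta; X) + log 2) / log N."""
    n = len(prior)
    mi = mutual_information(prior, likelihoods)
    return max(0.0, 1.0 - (mi + math.log(2)) / math.log(n))

# Toy example: four hypotheses, each mildly favouring one observation.
prior = [0.25, 0.25, 0.25, 0.25]
likelihoods = [
    [0.4, 0.2, 0.2, 0.2],
    [0.2, 0.4, 0.2, 0.2],
    [0.2, 0.2, 0.4, 0.2],
    [0.2, 0.2, 0.2, 0.4],
]
print(fano_bayes_lower_bound(prior, likelihoods))  # ≈ 0.461
```

Because the likelihoods are only weakly informative, no estimator can identify the true hypothesis with probability much better than about 0.54; the paper's contribution is to replace the mutual information here with a general f-informativity and the 0-1 loss with arbitrary losses and priors.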


Full work available at URL: https://arxiv.org/abs/1410.0503




Cited In (19)





