On the Optimal Boolean Function for Prediction Under Quadratic Loss
Publication: 5358564
DOI: 10.1109/TIT.2017.2686437
zbMATH Open: 1370.94615
arXiv: 1607.02381
OpenAlex: W2602115942
MaRDI QID: Q5358564
Ofer Shayevitz, Nir Weinberger
Publication date: 21 September 2017
Published in: IEEE Transactions on Information Theory
Abstract: Suppose $Y^n$ is obtained by observing a uniform Bernoulli random vector $X^n$ through a binary symmetric channel. Courtade and Kumar asked how large the mutual information between $Y^n$ and a Boolean function $b(X^n)$ could be, and conjectured that the maximum is attained by a dictator function. An equivalent formulation of this conjecture is that dictator minimizes the prediction cost in a sequential prediction of $Y^n$ under logarithmic loss, given $b(X^n)$. In this paper, we study the question of minimizing the sequential prediction cost under a different (proper) loss function: the quadratic loss. In the noiseless case, we show that majority asymptotically minimizes this prediction cost among all Boolean functions. We further show that for weak noise, majority is better than dictator, and that for strong noise, dictator outperforms majority. We conjecture that for quadratic loss, there is no single sequence of Boolean functions that is simultaneously (asymptotically) optimal at all noise levels.
Full work available at URL: https://arxiv.org/abs/1607.02381
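The quantities compared in the abstract can be evaluated exactly for small block lengths by enumeration. The following is a minimal illustrative sketch (not from the paper): it computes the mutual information $I(b(X^n); Y^n)$ and the minimal expected cumulative quadratic prediction cost for the dictator and majority functions over a BSC, for a hypothetical choice of $n = 5$ and a small grid of crossover probabilities. The function names and parameter values are illustrative assumptions; note that the paper's statements are asymptotic in $n$, so small-$n$ numbers need not reflect the asymptotic ordering.

```python
import itertools
import math

def channel_prob(x, y, alpha):
    """P(Y^n = y | X^n = x) for a memoryless BSC with crossover alpha."""
    d = sum(a != b for a, b in zip(x, y))
    return (alpha ** d) * ((1.0 - alpha) ** (len(x) - d))

def joint_b_y(n, b, alpha):
    """Joint distribution P(B = b(X^n), Y^n = y) for uniform X^n."""
    words = list(itertools.product((0, 1), repeat=n))
    p = {}
    for x in words:
        v, px = b(x), 2.0 ** (-n)
        for y in words:
            p[(v, y)] = p.get((v, y), 0.0) + px * channel_prob(x, y, alpha)
    return p

def mutual_information(n, b, alpha):
    """I(b(X^n); Y^n) in bits, by brute-force enumeration."""
    p = joint_b_y(n, b, alpha)
    pb, py = {}, {}
    for (v, y), q in p.items():
        pb[v] = pb.get(v, 0.0) + q
        py[y] = py.get(y, 0.0) + q
    return sum(q * math.log2(q / (pb[v] * py[y]))
               for (v, y), q in p.items() if q > 0.0)

def quadratic_cost(n, b, alpha):
    """Minimal expected cumulative quadratic loss for predicting
    Y_1, ..., Y_n one at a time, given B = b(X^n) and past Y's.
    Under quadratic loss the optimal predictor is the conditional mean."""
    p = joint_b_y(n, b, alpha)
    cost = 0.0
    for i in range(1, n + 1):
        # Marginalize to P(B = v, Y^i = prefix).
        pref = {}
        for (v, y), q in p.items():
            pref[(v, y[:i])] = pref.get((v, y[:i]), 0.0) + q
        # Group by context (v, y^{i-1}) to get E[Y_i | context].
        ctx = {}
        for (v, yi), q in pref.items():
            acc = ctx.setdefault((v, yi[:-1]), [0.0, 0.0])
            acc[0] += q           # P(context)
            acc[1] += q * yi[-1]  # P(context, Y_i = 1)
        for (v, yi), q in pref.items():
            mass, ones = ctx[(v, yi[:-1])]
            cost += q * (yi[-1] - ones / mass) ** 2
    return cost

if __name__ == "__main__":
    n = 5  # small odd n; enumeration is O(4^n)
    dictator = lambda x: x[0]
    majority = lambda x: int(sum(x) > n // 2)
    for alpha in (0.01, 0.1, 0.25, 0.4):
        print(f"alpha={alpha:0.2f}  "
              f"I(dict)={mutual_information(n, dictator, alpha):.4f}  "
              f"I(maj)={mutual_information(n, majority, alpha):.4f}  "
              f"Q(dict)={quadratic_cost(n, dictator, alpha):.4f}  "
              f"Q(maj)={quadratic_cost(n, majority, alpha):.4f}")
```

The mutual information column connects to the logarithmic-loss formulation via $H(Y^n \mid B) = n - I(B; Y^n)$, i.e., maximizing mutual information is the same as minimizing the sequential log-loss prediction cost; the quadratic column is the cost studied in this paper.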
Keywords: Boolean functions; quadratic loss function; Pinsker's inequality; logarithmic loss function; sequential prediction