Robust probability updating

From MaRDI portal
Publication:282914

DOI: 10.1016/J.IJAR.2016.03.001
zbMATH Open: 1381.60011
arXiv: 1512.03223
OpenAlex: W2337103830
MaRDI QID: Q282914
FDO: Q282914

Wouter M. Koolen, Thijs van Ommen, Peter D. Grünwald, Thijs E. Feenstra

Publication date: 12 May 2016

Published in: International Journal of Approximate Reasoning

Abstract: This paper discusses an alternative to conditioning that may be used when the probability distribution is not fully specified. It does not require any assumptions (such as CAR: coarsening at random) on the unknown distribution. The well-known Monty Hall problem is the simplest scenario where neither naive conditioning nor the CAR assumption suffice to determine an updated probability distribution. This paper thus addresses a generalization of that problem to arbitrary distributions on finite outcome spaces, arbitrary sets of 'messages', and (almost) arbitrary loss functions, and provides existence and characterization theorems for robust probability updating strategies. We find that for logarithmic loss, optimality is characterized by an elegant condition, which we call RCAR (reverse coarsening at random). Under certain conditions, the same condition also characterizes optimality for a much larger class of loss functions, and we obtain an objective and general answer to how one should update probabilities in the light of new information.
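The Monty Hall scenario mentioned in the abstract can be made concrete with a short sketch (illustrative only, not code from the paper): naive conditioning treats the host's announcement as a bare event, while proper conditioning must account for the host's message strategy. The host strategy below (opening a non-winning door uniformly at random when there is a choice) is one assumed strategy; the abstract's point is precisely that this strategy is in general unknown.

```python
# Illustrative sketch (assumed setup, not from the paper): the car is behind
# door 1, 2, or 3 with probability 1/3 each; the contestant picks door 1; the
# host opens one of the other doors, never revealing the car. Observing
# "host opens door 3" corresponds to the message {1, 2}
# ("the car is behind door 1 or door 2").

def naive_conditioning(prior, message):
    """Condition on the event that the outcome lies in `message`,
    ignoring how the message was chosen (valid only under CAR)."""
    total = sum(prior[x] for x in message)
    return {x: prior[x] / total for x in message}

def condition_on_message(prior, message_prob, message):
    """Condition on the message itself, using the host's strategy
    message_prob[x][m] = P(message m | outcome x)."""
    joint = {x: prior[x] * message_prob[x].get(message, 0.0) for x in prior}
    total = sum(joint.values())
    return {x: p / total for x, p in joint.items() if p > 0}

prior = {1: 1/3, 2: 1/3, 3: 1/3}

# One assumed host strategy when the contestant holds door 1: if the car is
# behind door 1 the host opens door 2 or 3 uniformly at random; otherwise
# he must open the single remaining empty door.
host = {
    1: {frozenset({1, 2}): 0.5, frozenset({1, 3}): 0.5},
    2: {frozenset({1, 2}): 1.0},  # car behind 2: host must open door 3
    3: {frozenset({1, 3}): 1.0},  # car behind 3: host must open door 2
}

msg = frozenset({1, 2})  # host opened door 3
print(naive_conditioning(prior, msg))          # gives 1/2 for door 1 -- wrong
print(condition_on_message(prior, host, msg))  # gives 1/3 for door 1
```

Under this strategy, switching doors wins with probability 2/3; naive conditioning would wrongly suggest the two remaining doors are equally likely. Other host strategies yield other posteriors, which is why an updating rule that is robust over the unknown strategy is needed.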


Full work available at URL: https://arxiv.org/abs/1512.03223





Cites Work


Cited In (1)






This page was built for publication: Robust probability updating
