A probabilistic view on predictive constructions for Bayesian learning


arXiv: 2208.06785 · MaRDI QID: Q6407801 · FDO: Q6407801


Authors: Patrizia Berti, Emanuela Dreassi, Fabrizio Leisen, Pietro Rigo, Luca Pratelli


Publication date: 14 August 2022

Abstract: Given a sequence $X=(X_1,X_2,\ldots)$ of random observations, a Bayesian forecaster aims to predict $X_{n+1}$ based on $(X_1,\ldots,X_n)$ for each $n\ge 0$. To this end, in principle, she only needs to select a collection $\sigma=(\sigma_0,\sigma_1,\ldots)$, called "strategy" in what follows, where $\sigma_0(\cdot)=P(X_1\in\cdot)$ is the marginal distribution of $X_1$ and $\sigma_n(\cdot)=P(X_{n+1}\in\cdot\mid X_1,\ldots,X_n)$ the $n$-th predictive distribution. Because of the Ionescu-Tulcea theorem, $\sigma$ can be assigned directly, without passing through the usual prior/posterior scheme. One main advantage is that no prior probability is to be selected. In a nutshell, this is the predictive approach to Bayesian learning. A concise review of the latter is provided in this paper. We try to put such an approach in the right framework, to make clear a few misunderstandings, and to provide a unifying view. Some recent results are discussed as well. In addition, some new strategies are introduced and the corresponding distribution of the data sequence $X$ is determined. The strategies concern generalized Pólya urns, random change points, covariates and stationary sequences.
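
As a minimal illustration of assigning a strategy directly, the sketch below samples a data sequence from the classical one-parameter Pólya urn (Blackwell–MacQueen) predictive rule, the textbook example of the approach described in the abstract; it is not one of the new strategies introduced in the paper, and the function names and parameters (`sample_sequence`, `theta`, `base_sampler`) are ours for illustration only. Note that the data law $P$ is never written down: the sequence is generated one observation at a time from the predictive distributions, with no prior specified.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sequence(n, theta=1.0, base_sampler=lambda r: r.standard_normal()):
    """Draw X_1, ..., X_n directly from the Polya-urn predictive rule
    sigma_n = (theta * nu + sum_{i<=n} delta_{X_i}) / (theta + n),
    where nu is the base measure sampled by `base_sampler`."""
    xs = []
    for i in range(n):                       # i = number of observations so far
        if rng.random() < theta / (theta + i):
            xs.append(base_sampler(rng))     # fresh draw from the base measure nu
        else:
            xs.append(xs[rng.integers(i)])   # repeat a uniformly chosen past value
    return xs

print(sample_sequence(10))
```

By the Ionescu-Tulcea theorem, these predictive distributions determine a unique law for the whole sequence, which in this particular case is the familiar exchangeable (Dirichlet-process) one.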
