Asymptotic optimality of myopic information-based strategies for Bayesian adaptive estimation

From MaRDI portal
Publication:5963513

DOI: 10.3150/14-BEJ670
zbMATH Open: 1388.62239
arXiv: 1506.05483
Wikidata: Q110236375 (Scholia: Q110236375)
MaRDI QID: Q5963513
FDO: Q5963513

Janne V. Kujala

Publication date: 22 February 2016

Published in: Bernoulli

Abstract: This paper presents a general asymptotic theory of sequential Bayesian estimation, giving results in the strongest sense of almost sure convergence. We show that, under certain smoothness conditions on the probability model, the greedy information gain maximization algorithm for adaptive Bayesian estimation is asymptotically optimal in the sense that the determinant of the posterior covariance in a certain neighborhood of the true parameter value is asymptotically minimal. Using this result, we also obtain an asymptotic expression for the posterior entropy, based on a novel definition of almost sure convergence on "most trials" (meaning that the convergence holds on a fraction of trials that converges to one). We then extend the results to a recently published framework that generalizes the usual adaptive estimation setting by allowing different trial placements to be associated with different, random costs of observation. For this setting, the author has proposed the heuristic of maximizing the expected information gain of a placement divided by its expected cost. In this paper, we show that this myopic strategy satisfies an analogous asymptotic optimality result when the convergence of the posterior distribution is considered as a function of the total cost (as opposed to the number of observations).
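The two selection rules discussed in the abstract — greedy expected information gain maximization, and its cost-normalized variant (expected information gain divided by expected cost) — can be illustrated with a small sketch. The probability model (a Bernoulli response with a logistic link), the cost function, and the grid discretization below are illustrative assumptions, not the specific model analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized parameter space with a uniform prior.
theta = np.linspace(-3, 3, 121)
prior = np.full(theta.size, 1 / theta.size)

def lik1(x):
    """Hypothetical model: p(y=1 | theta, x) = sigmoid(x - theta)."""
    return 1 / (1 + np.exp(-(x - theta)))

def entropy_bernoulli(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_info_gain(post, x):
    """Mutual information I(theta; y | x) under the current posterior:
    predictive entropy minus expected conditional entropy."""
    p1 = lik1(x)
    marg1 = np.dot(post, p1)                      # predictive p(y=1 | x)
    pred_H = entropy_bernoulli(marg1)
    cond_H = np.dot(post, entropy_bernoulli(p1))  # E_theta[H(y | theta, x)]
    return pred_H - cond_H

def expected_cost(x):
    """Hypothetical cost model: placements far from 0 cost more to observe."""
    return 1.0 + 0.5 * abs(x)

candidates = np.linspace(-3, 3, 61)
true_theta = 0.7
post = prior.copy()

for trial in range(200):
    # Myopic rule: maximize expected information gain per unit expected cost.
    # (Dropping the denominator recovers plain greedy info gain maximization.)
    scores = [expected_info_gain(post, x) / expected_cost(x) for x in candidates]
    x = candidates[int(np.argmax(scores))]
    # Simulate a response at the chosen placement and do the Bayes update.
    y = rng.random() < 1 / (1 + np.exp(-(x - true_theta)))
    post = post * (lik1(x) if y else 1 - lik1(x))
    post /= post.sum()

est = np.dot(post, theta)
print(f"posterior mean ≈ {est:.2f} (true value {true_theta})")
```

The paper's asymptotic optimality results concern exactly this kind of loop: the cost-normalized rule is shown to be optimal when posterior convergence is measured against accumulated cost rather than trial count.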


Full work available at URL: https://arxiv.org/abs/1506.05483







Cited In (1)






