Reading policies for joins: an asymptotic analysis

From MaRDI portal
Publication:2467117

DOI: 10.1214/105051606000000646 · zbMATH Open: 1163.90789 · arXiv: math/0703019 · OpenAlex: W3101031228 · MaRDI QID: Q2467117 · FDO: Q2467117

Nariankadu D. Shyamalkumar, Ralph P. Russo

Publication date: 18 January 2008

Published in: The Annals of Applied Probability

Abstract: Suppose that $m_n$ observations are made from the distribution $\mathbf{R}$ and $n - m_n$ from the distribution $\mathbf{S}$. Associate with each pair, $x$ from $\mathbf{R}$ and $y$ from $\mathbf{S}$, a nonnegative score $\phi(x,y)$. An optimal reading policy is one that yields a sequence $m_n$ that maximizes $\mathbb{E}(M(n))$, the expected sum of the $(n-m_n)m_n$ observed scores, uniformly in $n$. The alternating policy, which switches between the two sources, is the optimal nonadaptive policy. In contrast, the greedy policy, which chooses its source to maximize the expected gain on the next step, is shown to be the optimal policy. Asymptotics are provided for the case where the $\mathbf{R}$ and $\mathbf{S}$ distributions are discrete and $\phi(x,y) = 1$ or $0$ according as $x = y$ or not (i.e., the observations match). Specifically, an invariance result is proved which guarantees that, for a wide class of policies including the alternating and the greedy, the variable $M(n)$ obeys the same CLT and LIL. A more delicate analysis of the sequence $\mathbb{E}(M(n))$ and of the sample paths of $M(n)$, for both alternating and greedy, reveals the slender sense in which the latter policy is asymptotically superior to the former, as well as a sense of equivalence of the two and robustness of the former.
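
As an informal illustration of the two policies described in the abstract, the following is a minimal Python simulation sketch, not taken from the paper: it uses the matching score $\phi(x,y) = 1$ if $x = y$ and $0$ otherwise, and compares the alternating policy with a greedy rule that reads from whichever source has the larger expected one-step gain. The particular discrete distributions R and S in the demo are illustrative assumptions, not from the source.

```python
# Sketch of the alternating vs. greedy reading policies for the matching
# score phi(x, y) = 1{x = y}. Distributions below are assumed for the demo.
import random
from collections import Counter

def run_policy(policy, r_pmf, s_pmf, n, seed=0):
    """Read n observations, one per step, from R or S according to `policy`.

    Returns M(n): the number of matching pairs (x, y) with x from the
    R-sample and y from the S-sample, i.e. the sum of phi(x, y) over pairs.
    """
    rng = random.Random(seed)
    r_vals, r_wts = zip(*r_pmf.items())
    s_vals, s_wts = zip(*s_pmf.items())
    r_counts, s_counts = Counter(), Counter()
    score = 0
    for step in range(n):
        if policy == "alternating":
            read_r = (step % 2 == 0)  # switch sources every step
        elif policy == "greedy":
            # Expected one-step gain from reading R: sum over current S
            # observations y of P_R(X = y); symmetrically for reading S.
            gain_r = sum(cnt * r_pmf.get(y, 0.0) for y, cnt in s_counts.items())
            gain_s = sum(cnt * s_pmf.get(x, 0.0) for x, cnt in r_counts.items())
            read_r = gain_r >= gain_s
        else:
            raise ValueError(policy)
        if read_r:
            x = rng.choices(r_vals, weights=r_wts)[0]
            score += s_counts[x]  # new x matches these existing y's
            r_counts[x] += 1
        else:
            y = rng.choices(s_vals, weights=s_wts)[0]
            score += r_counts[y]  # new y matches these existing x's
            s_counts[y] += 1
    return score

# Illustrative discrete distributions (assumed, not from the paper).
R = {0: 0.5, 1: 0.3, 2: 0.2}
S = {0: 0.2, 1: 0.3, 2: 0.5}
n = 1000
print("alternating M(n):", run_policy("alternating", R, S, n))
print("greedy      M(n):", run_policy("greedy", R, S, n))
```

Under the invariance result quoted above, both policies fall in the class whose M(n) obeys the same CLT and LIL, so differences observed in such a simulation are of the "slender" asymptotic kind discussed in the paper.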


Full work available at URL: https://arxiv.org/abs/math/0703019






This page was built for publication: Reading policies for joins: an asymptotic analysis
