Empirical Bayes rules for selecting good populations (Q792031)
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | Empirical Bayes rules for selecting good populations | scientific article |
Statements
Empirical Bayes rules for selecting good populations (English)
1983
The problem of selecting populations better than a control is considered. Assume that \(\pi_1,\dots,\pi_k\) are \(k\) populations and that \(X_i\) is a random observation of a certain characteristic of \(\pi_i\), with \(X_i\sim U(0,\theta_i)\), where \(\theta_i\) is unknown for \(1\le i\le k\). Let \(\theta_0\) be a control parameter. The population \(\pi_i\) is called good if \(\theta_i>\theta_0\) and bad if \(\theta_i\le\theta_0\). Let \(\Theta=\{\theta=(\theta_1,\dots,\theta_k)\mid \theta_i>0 \text{ for all } 1\le i\le k\}\). For any \(\theta\in\Theta\), let \(A(\theta)=\{i\mid \theta_i>\theta_0\}\) and \(B(\theta)=\{i\mid \theta_i\le\theta_0\}\), so that \(A(\theta)\) (respectively \(B(\theta)\)) is the set of indices of the good (bad) populations. The goal is to select all the good populations and reject the bad ones. The problem is formulated as follows:

(1) Let \(H=\{S\mid S\subset\{1,2,\dots,k\}\}\) be the action space. Taking action \(S\) declares \(\pi_i\) good if \(i\in S\) and bad if \(i\notin S\).

(2) The loss function is \(L(\theta,S)=L_1\sum_{i\in A(\theta)\setminus S}(\theta_i-\theta_0)+L_2\sum_{i\in B(\theta)\cap S}(\theta_0-\theta_i)\): the first term penalizes good populations that are rejected, the second penalizes bad populations that are selected.

(3) Let \(G(\theta)\) be an unknown prior distribution on \(\Theta\).

(4) Let \((\theta_{i1},Y_{i1}),\dots,(\theta_{in},Y_{in})\) be pairs of random variables associated with \(\pi_i\), where \(Y_{ij}\mid\theta_{ij}\sim U(0,\theta_{ij})\) for \(1\le i\le k\), \(1\le j\le n\). Let \(Y_j=(Y_{1j},\dots,Y_{kj})\); then \(Y_j\) denotes the \(j\)th set of past observations from \(\pi_1,\dots,\pi_k\).

(5) Let \(X=(X_1,\dots,X_k)\) denote the present observations, with joint density \(f(x\mid\theta)=\prod_{i=1}^{k}(1/\theta_i)I_{(0,\theta_i)}(x_i)\).

(6) Let \(D=\{\delta\mid \delta:\mathcal X\to H \text{ is measurable}\}\); then \(r(G)=\inf_{\delta\in D}r(G,\delta)\) is the minimum Bayes risk. The decision rules \(\{\delta_n(X;Y_1,\dots,Y_n)\}_{n=1}^{\infty}\) are said to be empirical Bayes (E.B.) relative to \(G\) if \[ r_n(G,\delta_n)=\int_{\mathcal X}E\int_{\Theta}L(\theta,\delta_n(x;Y_1,\dots,Y_n))f(x\mid\theta)\,dG(\theta)\,dx\to r(G)\quad\text{as } n\to\infty. \]

The authors define a sequence of decision rules for the case where \(\theta_0\) is known and another for the case where \(\theta_0\) is unknown, and show that both sequences are empirical Bayes. The results are also generalized to the case where \(X_i\) has density \(f_i(x_i\mid\theta_i)=P_i(x_i)C_i(\theta_i)I_{(0,\theta_i)}(x_i)\) with \(C_i(\theta)^{-1}=\int_0^{\theta}P_i(x)\,dx\). A Monte Carlo study is presented to show how fast the derived empirical Bayes rules converge.
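The review does not reproduce the authors' rules, but the componentwise Bayes rule that an E.B. rule mimics is easy to illustrate: include \(\pi_i\) exactly when \(L_1 E[(\theta_i-\theta_0)^+\mid x_i]\ge L_2 E[(\theta_0-\theta_i)^+\mid x_i]\). The following sketch assumes a Pareto prior on each \(\theta_i\) (conjugate to \(U(0,\theta)\)); the prior, its parameters, and all names are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the paper's rule): componentwise Bayes selection under
# an assumed Pareto(alpha, beta) prior on theta_i, which is conjugate to
# X_i | theta_i ~ U(0, theta_i).
import numpy as np

rng = np.random.default_rng(0)

def posterior_sample(x, alpha, beta, size=100_000):
    """Posterior of theta given X = x ~ U(0, theta) under a Pareto(alpha, beta)
    prior is Pareto(alpha + 1, max(beta, x)); sample it by inverse transform."""
    a, m = alpha + 1, max(beta, x)
    return m * (1.0 + rng.pareto(a, size))

def bayes_select(x, theta0, alpha, beta, L1=1.0, L2=1.0):
    """Include population i iff the posterior expected loss of excluding it
    (missing a good population) is at least that of including it (keeping a
    bad one), matching the loss L(theta, S) above."""
    theta = posterior_sample(x, alpha, beta)
    miss_cost = L1 * np.mean(np.maximum(theta - theta0, 0.0))  # cost of i not in S
    keep_cost = L2 * np.mean(np.maximum(theta0 - theta, 0.0))  # cost of i in S
    return miss_cost >= keep_cost

x_obs = [0.4, 1.1, 2.5]  # present observations X = (X_1, ..., X_k)
S = [i for i, x in enumerate(x_obs) if bayes_select(x, theta0=1.0, alpha=2.0, beta=0.5)]
print("selected populations:", S)
```

With these illustrative parameters the prior mean is \(1.0\), so the rule rejects the population with \(x=0.4\) and selects the other two.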
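For the known-\(\theta_0\) case, the defining property \(r_n(G,\delta_n)\to r(G)\) can be checked empirically with a plug-in rule built only from past data. Under \(U(0,\theta)\) sampling, \(E[(\theta-\theta_0)^+\mid x]f(x)=P(X>t)+(t-\theta_0)f(t)\) with \(t=\max(x,\theta_0)\), and \(E[(\theta_0-\theta)^+\mid x]f(x)=(\theta_0-x)f(x)-P(x<X\le\theta_0)\) for \(x<\theta_0\), where \(f\) is the marginal density of \(X\); both are estimable from the \(Y_{ij}\). This is one standard plug-in construction, not necessarily the authors' \(\delta_n\), and the simulated prior below is an arbitrary choice.

```python
# Hedged Monte Carlo sketch of the empirical Bayes property: the risk of a
# plug-in rule built from past data Y_1,...,Y_n should approach the Bayes
# risk r(G) as n grows. Standard construction for U(0, theta) sampling;
# not necessarily the rule proposed in the paper.
import numpy as np

rng = np.random.default_rng(1)
theta0, L1, L2 = 1.0, 1.0, 1.0

def draw(n):
    """Draw (theta, X) with theta ~ G = Pareto(2, 0.5) and X | theta ~ U(0, theta)."""
    theta = 0.5 * (1.0 + rng.pareto(2.0, n))
    return theta, rng.uniform(0.0, theta)

def eb_include(x, past, h=0.05):
    """Plug-in test of L1*H1(x) >= L2*H2(x), where
    H1(x) = P(X > t) + (t - theta0) f(t),  t = max(x, theta0),
    H2(x) = (theta0 - x) f(x) - P(x < X <= theta0)  for x < theta0, else 0,
    using the empirical survival function and a window density estimate."""
    surv = lambda s: np.mean(past > s)
    dens = lambda s: np.mean((past > s - h) & (past <= s + h)) / (2 * h)
    t = max(x, theta0)
    H1 = surv(t) + (t - theta0) * dens(t)
    H2 = 0.0 if x >= theta0 else (theta0 - x) * dens(x) - (surv(x) - surv(theta0))
    return L1 * H1 >= L2 * H2

def risk(n_past, reps=2000):
    """Average per-population loss of the plug-in rule on fresh (theta, X)."""
    total = 0.0
    for _ in range(reps):
        past = draw(n_past)[1]                 # Y_1, ..., Y_n for one population
        theta, x = (v[0] for v in draw(1))     # present (theta, X)
        if eb_include(x, past):
            total += L2 * max(theta0 - theta, 0.0)
        else:
            total += L1 * max(theta - theta0, 0.0)
    return total / reps

for n in (20, 200, 2000):
    print(f"n = {n:5d}: estimated risk = {risk(n):.4f}")
```

The printed estimates are illustrative only; the paper's Monte Carlo study addresses the actual convergence rates of the derived rules.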
Keywords: asymptotically optimal; truncation parameter; selecting populations better than a control; empirical Bayes; Monte Carlo study