Turning probabilities into expectations (Q760117)
From MaRDI portal
scientific article; zbMATH DE number 3883408
Statements
Turning probabilities into expectations (English)
1984
Suppose that you specify your prior probability that an unknown quantity \(\theta\) lies in each member of a disjoint partition of the values of \(\theta\). What does this imply about your prior mean and variance for \(\theta\), and your posterior mean and variance, given sample information? We provide a partial answer by modifying a suggestion of *C. F. Manski* [ibid. 9, 59-65 (1981; Zbl 0451.62004)] for incorporating the cost of specification of prior probabilities into the analysis of decision problems. This modification leads to a simple explicit solution in the problem of estimating the mean of a distribution, with quadratic loss, in the class of linear functions of the sample, and this solution is related to the problem of turning probabilities into expectations.
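As an illustration of the elicitation problem the abstract poses (not the paper's own method), one naive way to turn interval probabilities for \(\theta\) into a prior mean and variance is to place each interval's probability mass at the interval's midpoint. The intervals, probabilities, and the midpoint convention below are assumptions made only for this sketch:

```python
def prior_moments(intervals, probs):
    """Approximate the prior mean and variance of theta from elicited
    probabilities on disjoint intervals, placing each interval's mass
    at its midpoint (a crude convention chosen for illustration)."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    mids = [(a + b) / 2 for a, b in intervals]
    mean = sum(p * m for p, m in zip(probs, mids))
    var = sum(p * (m - mean) ** 2 for p, m in zip(probs, mids))
    return mean, var

# Hypothetical elicitation: P(theta in [0,1)) = 0.2, [1,2) = 0.5, [2,3) = 0.3
mean, var = prior_moments([(0, 1), (1, 2), (2, 3)], [0.2, 0.5, 0.3])
```

Any such reduction hides the within-interval uncertainty, which is exactly why the paper's cost-of-specification modification of Manski's proposal is needed to get a principled answer rather than an ad hoc one.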
Keywords:
- midrisk
- linear Bayes rules
- elicitation of subjective probabilities
- prior probability
- prior mean and variance
- posterior mean and variance
- cost of specification of prior probabilities
- estimating the mean
- quadratic loss
- linear functions
- turning probabilities into expectations