A binary unrelated-question RRT model accounting for untruthful responding (Q2278636)
scientific article; zbMATH DE number 7140471
| Language | Label | Description | Also known as |
|---|---|---|---|
| default for all languages | No label defined | | |
| English | A binary unrelated-question RRT model accounting for untruthful responding | scientific article; zbMATH DE number 7140471 | |
Statements
A binary unrelated-question RRT model accounting for untruthful responding (English)
0 references
5 December 2019
0 references
Warner's method for eliciting information about a sensitive character $C$ from survey respondents is based on a Randomized Response Technique (RRT). It consists of two related statements (questions), one referring to possession of $C$ and the other to possession of the complement of $C$. The respondent is asked to select one of the questions using a randomization device and then to answer it truthfully with `yes' or `no'. An improvement over this is the model due to \textit{B. G. Greenberg} et al. [``The unrelated question randomized response model: theoretical framework'', J. Am. Stat. Assoc. 64, 520--539 (1969)], in which the second question is unrelated and nonsensitive. However, social desirability bias (SDB) arises when respondents answer sensitive questions untruthfully in order to appear socially desirable. Several variants of this model have been suggested by researchers to tackle SDB. In this paper, the authors propose a two-question model. The first question uses Greenberg et al.'s unrelated-question design to estimate the proportion of respondents who trust the randomization technique. The second question uses a second randomization device: respondents who trust the technique receive either the sensitive question or the unrelated question, while respondents who do not trust it answer the unrelated question. Define the following probabilities: let $\pi_C$ be the prevalence of the sensitive character $C$, let $\pi_y$ be the known prevalence of the unrelated character used in the second question, and let $\pi_b$ be the known prevalence of the unrelated character used in the first question. Next, let $p_b$ and $p_c$ be the probabilities of receiving the question about trust in the first step and the sensitive question in the second step, respectively. Furthermore, let $\pi_a$ be the probability that the respondent truthfully answers the sensitive question. The probability of a `yes' response to question $i$, $i = 1, 2$, is given by equations (4.1) and (4.2) of the paper:
\[ P_{y_1} = p_b \pi_a + (1 - p_b) \pi_b \]
and
\[ P_{y_2} = \pi_a [p_c \pi_C + (1 - p_c) \pi_y] + (1 - \pi_a) \pi_y. \]
Solving these, the authors obtain estimates of $\pi_a$ and $\pi_C$. The variance of the estimate of $\pi_a$ is $P_{y_1}(1 - P_{y_1})/(n p_b^2)$; a first-order Taylor approximation is then used to obtain the variance of the estimate of $\pi_C$. Simulation results follow, showing that the extra layer of precaution in the proposed model also reduces the time and effort needed to train respondents prior to the survey.
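The moment estimators described above can be made concrete with a short numerical sketch. The following Python code is not taken from the paper; the sample size, design probabilities and prevalences are illustrative assumptions. It simulates `yes' counts under equations (4.1) and (4.2), solves them for the estimates of $\pi_a$ and $\pi_C$, and evaluates the variance formula quoted above.

```python
# Minimal simulation sketch of the two-question estimators described in the
# review. All parameter values below are illustrative assumptions, not values
# from the paper.
import numpy as np

rng = np.random.default_rng(seed=1)

n = 5000       # sample size (assumed)
pi_C = 0.30    # true prevalence of the sensitive character C (assumed)
pi_a = 0.80    # true probability of answering the sensitive question truthfully (assumed)
pi_b = 0.25    # known prevalence of the unrelated character in question 1 (assumed)
pi_y = 0.40    # known prevalence of the unrelated character in question 2 (assumed)
p_b = 0.70     # probability of receiving the trust question in step 1 (assumed)
p_c = 0.70     # probability of receiving the sensitive question in step 2 (assumed)

# 'Yes' probabilities for questions 1 and 2, equations (4.1) and (4.2).
P_y1 = p_b * pi_a + (1 - p_b) * pi_b
P_y2 = pi_a * (p_c * pi_C + (1 - p_c) * pi_y) + (1 - pi_a) * pi_y

# Simulate the observed proportions of 'yes' answers to the two questions.
y1_hat = rng.binomial(n, P_y1) / n
y2_hat = rng.binomial(n, P_y2) / n

# Moment estimators obtained by solving (4.1)-(4.2) for pi_a and pi_C.
pi_a_hat = (y1_hat - (1 - p_b) * pi_b) / p_b
pi_C_hat = pi_y + (y2_hat - pi_y) / (pi_a_hat * p_c)

# Variance of the estimate of pi_a as quoted in the review,
# evaluated here at the true P_y1 (known in this simulation).
var_pi_a_hat = P_y1 * (1 - P_y1) / (n * p_b**2)

print(f"pi_a estimate: {pi_a_hat:.4f}  (variance {var_pi_a_hat:.2e})")
print(f"pi_C estimate: {pi_C_hat:.4f}")
```

With these assumed values the estimates should land close to $\pi_a = 0.8$ and $\pi_C = 0.3$ for a sample of this size.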
0 references
model efficiency
0 references
optional randomized response models
0 references
unrelated question model
0 references
untruthful responding
0 references
randomized response technique (RRT)
0 references
social desirability bias (SDB)
0 references