Collect Sensitive Survey Responses Privately

This is a draft of a chapter from my current book, Practical Math for Programmers: A Guided Tour of Math in Production Software.

Problem: Determine an aggregate statistic about a sensitive question, when survey respondents do not trust that their answers will be kept secret.

The solution:

import random
from typing import List, Tuple

def respond_privately(true_answer: bool) -> bool:
    '''Respond to a survey with plausible deniability about your answer.'''
    be_honest = random.random() < 0.5
    random_answer = random.random() < 0.5
    return true_answer if be_honest else random_answer

def aggregate_responses(responses: List[bool]) -> Tuple[float, float]:
    '''Return the estimated fraction of survey respondents that have a truthful
    Yes answer to the survey question.
    '''
    yes_response_count = sum(responses)
    n = len(responses)
    mean = 2 * yes_response_count / n - 0.5
    # Use n-1 when estimating variance, as per Bessel's correction.
    variance = 3 / (4 * (n - 1))
    return (mean, variance)
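
As a quick sanity check on the two functions above, here is a small simulation (not part of the original tip). It generates a synthetic population with an assumed 30% true “yes” rate, privatizes each answer, and compares the recovered estimate to the truth. The rate and the sample size are arbitrary choices for illustration, and the snippet assumes respond_privately and aggregate_responses are defined in the same module.

import random

# Hypothetical illustration: 100,000 respondents, 30% of whom would
# truthfully answer "yes". Both numbers are arbitrary.
random.seed(0)  # fixed seed so the example is reproducible
true_rate = 0.3
true_answers = [random.random() < true_rate for _ in range(100_000)]
responses = [respond_privately(answer) for answer in true_answers]

estimate, variance = aggregate_responses(responses)
print(f"true fraction:      {sum(true_answers) / len(true_answers):.4f}")
print(f"estimated fraction: {estimate:.4f}")
print(f"estimator variance: {variance:.2e}")

With 100,000 responses, the estimate typically lands within about half a percentage point of the true rate, consistent with the variance discussed below.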

In the late 1960s, most abortions were illegal in the United States. Daniel G. Horvitz, a statistician at the Research Triangle Institute in North Carolina and a leader in survey design for the social sciences, was tasked with estimating the number of women in North Carolina who were having illegal abortions. The goal was to inform state and federal policy makers of abortion statistics, many of which went unreported, even when performed legally.

The obstacles were obvious. As Horvitz put it, “a prudent woman would not disclose to a stranger the fact that she participated in a crime for which she might be prosecuted.” [Abernathy70] This resulted in a strong bias in the survey responses. Similar issues have plagued investigations into illegal activities of all kinds, including drug abuse and violent crime. Lack of knowledge of basic statistics on illegal behavior led to various misconceptions, such as that abortions were not frequently sought.

Horvitz worked with biostatisticians James Abernathy and Bernard Greenberg to test a new method to overcome this hurdle, without violating the respondents’ privacy or their ability to plausibly deny illegal behavior. The method, called randomized response, had been invented by Stanley Warner a few years earlier, in 1965. [Warner65] Warner’s method was a bit different from the one shown in this tip, but both Warner’s method and the sample code above use the same strategy of adding randomization to the survey.

The mechanism, as shown in the code above, requires respondents to start by flipping a coin. If heads, they answer the sensitive question honestly. If tails, they flip a second coin to determine how to answer: heads means a “yes” answer, tails means a “no” answer. Crucially, the coin flips are private and controlled by the respondent. And so if a respondent answers “yes” to the question, they can plausibly claim the “yes” was determined by the coin, preserving their privacy. The figure below depicts this process as a diagram.

[Figure: a branching diagram showing the process a survey respondent goes through to record their response.]

Another way to describe the result is to say that each respondent’s answer is a single bit of information that is inverted with a probability of 1/4. It’s halfway between two extremes on the privacy/accuracy trade-off curve. The first extreme is a “perfectly honest” response, where the bit is never flipped and all information is retained. The second extreme has the bit flipped with probability 1/2, which is equivalent to ignoring the question and choosing your answer completely at random, losing all information in the aggregated answers. From this perspective, the aggregated survey responses can be viewed as a digital signal, and the privacy mechanism adds noise to this signal.
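
To see where the 1/4 comes from: a response differs from the truth only when the first coin comes up tails and the second coin happens to land on the side opposite the true answer, so

$$\mathbf{P}(\textup{flip}) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}$$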

It remains to determine how to recover the aggregate signal from these noisy responses. In other words, the interviewer cannot know an individual’s true answer, but they can, with a little extra work, estimate statistics about the underlying population by correcting for the statistical bias. This is possible because the randomization is well controlled. The expected fraction of “yes” responses can be written in terms of the true fraction of “yes” responses, and the true fraction can then be solved for. In this case, where the coin is fair, the formula is as follows (where $\mathbf{P}$ means “the probability of”).

$$\mathbf{P}(\textup{answer yes}) = \frac{1}{2} \mathbf{P}(\textup{true answer yes}) + \frac{1}{4}$$
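
This formula follows by conditioning on the first coin flip: with probability 1/2 the respondent reports their true answer, and with probability 1/2 they report “yes” according to a second fair coin, i.e.,

$$\mathbf{P}(\textup{answer yes}) = \frac{1}{2}\,\mathbf{P}(\textup{true answer yes}) + \frac{1}{2} \cdot \frac{1}{2}$$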

And so we solve for $\mathbf{P}(\textup{true answer yes})$:

$$\mathbf{P}(\textup{true answer yes}) = 2\,\mathbf{P}(\textup{answer yes}) - \frac{1}{2}$$

We can replace the true probability $\mathbf{P}(\textup{answer yes})$ above with our fraction of “yes” responses from the survey, and the result is an estimate $\hat{p}$ of $\mathbf{P}(\textup{true answer yes})$. This estimate is unbiased, but it has additional variance (beyond the usual variance caused by selecting a finite random sample from the population of interest) introduced by the randomization mechanism.

With a little effort, one can calculate that the variance of the estimate is

$$\textup{Var}(\hat{p}) = \frac{3}{4n}$$
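
Here is a sketch of that calculation, holding the respondents’ true answers fixed so that the variance measures only the noise added by the mechanism. Writing $R_i$ for the $i$-th reported bit (notation introduced here, not in the original), each $R_i$ is a Bernoulli variable whose success probability is either 3/4 or 1/4 depending on the true answer, and in both cases its variance is $(3/4)(1/4) = 3/16$. Since $\hat{p} = \frac{2}{n}\sum_i R_i - \frac{1}{2}$,

$$\textup{Var}(\hat{p}) = \frac{4}{n^2} \sum_{i=1}^{n} \textup{Var}(R_i) = \frac{4}{n^2} \cdot n \cdot \frac{3}{16} = \frac{3}{4n}$$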

And via Chebyshev’s inequality, which bounds the probability that an estimator is far from its expectation, we can build a confidence interval and determine the sample sizes needed. More specifically, the estimate $\hat{p}$ deviates from the true fraction by an additive error of more than $q$ with probability at most $\textup{Var}(\hat{p}) / q^2$. This implies that for a confidence level of $1 - c$, it takes at least $n \geq 3 / (4 c q^2)$ samples. For example, to obtain an error of 0.01 with a confidence level of 90% ($c = 0.1$), it takes at least 75,000 responses.
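
That bound is straightforward to turn into code. The sketch below uses a hypothetical helper name, not something from the original tip, to compute the smallest $n$ satisfying $n \geq 3 / (4 c q^2)$.

import math

def required_sample_size(error: float, failure_probability: float) -> int:
    '''Smallest n satisfying the Chebyshev bound n >= 3 / (4 * c * q^2),
    where q is the allowed additive error and c is the allowed probability
    of exceeding that error.
    '''
    return math.ceil(3 / (4 * failure_probability * error ** 2))

# Error 0.01 with 90% confidence (c = 0.1) needs 75,000 responses.
print(required_sample_size(0.01, 0.1))  # 75000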

Horvitz’s randomization mechanism did not use coin flips. Instead, he and his colleagues used an opaque box of red and blue balls, which the respondent, who was in the same room as the surveyor, shook and then privately observed a random color through a small window facing away from the surveyor. The statistical principle is the same. Horvitz and his associates also surveyed the women about their views on the mechanism’s privacy protections. When asked whether their friends would honestly answer a direct question about abortion, more than 80% thought their friends would lie or were unsure. [footnote: A common trick in survey methodology when asking someone whether they would be dishonest is to instead ask whether their friends would be dishonest. This tends to elicit more honesty, because people are less inclined to uphold a false perception of the moral integrity of others, and people also don’t realize that their opinion of their friends correlates with their own behavior and attitudes. In other words, liars don’t admit to lying, but they think lying is much more common than it really is.] But 60% were convinced there was no trick to the randomization, while 20% were unsure and 20% thought there was a trick. This suggests that many people believed Horvitz’s randomization mechanism provided enough protection to answer honestly.

Horvitz’s survey was a resounding success, both for randomized response as a method and for measuring abortion prevalence. [Abernathy70] They estimated the abortion rate at around 22 per 100 conceptions, with a distinct racial bias – minorities were twice as likely as whites to have abortions. Comparing their results to an earlier national study from 1955 – the so-called Arden House estimate – which gave a range of between 200,000 and 1.2 million abortions per year, Horvitz’s team more accurately estimated that there were 699,000 abortions in 1955 in the United States, with a reported standard deviation of about 6,000, less than one percent of the estimate. For 1967, the year of their study, they estimated the number at 829,000.

Their estimate was widely referenced in the flurry of abortion law reforms and court cases that followed, as public interest in the topic grew. For example, it is cited in the 1971 California Supreme Court opinion for the case Ballard v. Anderson, which concerned whether a minor needed parental consent to obtain an otherwise legal abortion. [Ballard71, Roemer71] It was also cited in amici curiae briefs submitted to the Supreme Court of the United States in 1971 for Roe v. Wade, the famous case that struck down most US laws making abortion illegal. One such brief was filed jointly by leading women’s rights organizations across the country, such as the National Organization for Women. Citing Horvitz’s study, the brief stated, [Womens71]

While the realities of law enforcement, social and public health issues posed by abortion laws have been openly discussed […] it is only over a period not exceeding the last ten years that a fact appears undeniable, although statistically unverifiable: there are at least one million illegal abortions in the United States every year. Indeed, studies indicate that while local law still has qualification requirements, relaxing the law has not substantially decreased the number of women who obtain illegal abortions.

It is unclear how the authors arrived at this one million figure (Horvitz’s estimate was about 20% lower for 1967), or what they meant by “statistically unverifiable”. This may be a misinterpretation of the randomized response technique. In any event, randomized response played a crucial role in providing a basis for the political debate.

Despite Horvitz’s success and decades of additional research on crime, drug use, and other sensitive topics, randomized response mechanisms have often been applied poorly. In some cases, the required randomization is unavoidably complex, such as when a continuous random number is needed. In those cases, a manual randomization mechanism is too complicated for a respondent to carry out accurately. Software-assisted devices can help, but they can also breed mistrust in the respondent. See [Rueda16] for additional discussion of these pitfalls and of existing software packages that assist in using randomized response. See [Fox16] for an analysis of the statistical differences between the variety of methods used between 1970 and 2010.

In other contexts, analogs of randomized response may not have the intended effect. In the 1950s, Utah used death by firing squad as capital punishment. To ease the shooters’ consciences, one of the five riflemen was randomly given a blank cartridge, providing him with plausible deniability that he had fired the fatal shot. However, this approach failed on two counts. First, once a shot was fired, a rifleman could tell from the recoil whether his bullet was real. Second, a 20% chance of holding the blank was not enough to keep a shooter with a guilty conscience from intentionally missing. During the 1951 execution of Eliseo Mares, all four real bullets missed the convict’s heart, hitting his chest, stomach, and hip. He died, but neither painlessly nor instantly.

Among the many lessons that can be learned from this botched execution, one is that randomization mechanisms must account for both the psychology of the participants and the severity of a failure.

References

@book{Fox16,
  title = {{Randomized Response and Related Methods: Surveying Sensitive Data}},
  author = {James Alan Fox},
  edition = {2nd},
  year = {2016},
  doi = {10.4135/9781506300122},
}

@article{Abernathy70,
  author = {Abernathy, James R. and Greenberg, Bernard G. and Horvitz, Daniel G.},
  title = {{Estimates of induced abortion in urban North Carolina}},
  journal = {Demography},
  volume = {7},
  number = {1},
  pages = {19-29},
  year = {1970},
  month = {02},
  issn = {0070-3370},
  doi = {10.2307/2060019},
  url = {https://doi.org/10.2307/2060019},
}

@article{Warner65,
  author = {Stanley L. Warner},
  journal = {Journal of the American Statistical Association},
  number = {309},
  pages = {63--69},
  publisher = {{American Statistical Association, Taylor & Francis, Ltd.}},
  title = {Randomized Response: A Survey Technique for Eliminating Evasive
           Answer Bias},
  volume = {60},
  year = {1965},
}

@article{Ballard71,
  title = {{Ballard v. Anderson}},
  journal = {California Supreme Court L.A. 29834},
  year = {1971},
  url = {https://caselaw.findlaw.com/ca-supreme-court/1826726.html},
}

@misc{Womens71,
  title = {{Motion for Leave to File Brief Amici Curiae on Behalf of Women’s
           Organizations and Named Women in Support of Appellants in Each Case,
           and Brief Amici Curiae.}},
  booktitle = {{Appellate Briefs for the case of Roe v. Wade}},
  number = {WL 128048},
  year = {1971},
  publisher = {Supreme Court of the United States},
}

@article{Roemer71,
  author = {R. Roemer},
  journal = {Am J Public Health},
  pages = {500--509},
  title = {Abortion law reform and repeal: legislative and judicial developments
           },
  volume = {61},
  number = {3},
  year = {1971},
}

@incollection{Rueda16,
  title = {Chapter 10 - Software for Randomized Response Techniques},
  editor = {Arijit Chaudhuri and Tasos C. Christofides and C.R. Rao},
  series = {Handbook of Statistics},
  publisher = {Elsevier},
  volume = {34},
  pages = {155-167},
  year = {2016},
  booktitle = {Data Gathering, Analysis and Protection of Privacy Through
               Randomized Response Techniques: Qualitative and Quantitative Human
               Traits},
  doi = {10.1016/bs.host.2016.01.009},
  author = {M. Rueda and B. Cobo and A. Arcos and R. Arnab},
}
