
Anonymity in the Time of a Pandemic: Privacy vs. Transparency

By Cansu Canca

As coronavirus cases increase worldwide, institutions keep their communities informed with frequent updates—but only up to a point. They share minimal information, such as the number of cases, but omit the names of individuals and other identifying details.

Many institutions are legally obligated to protect individual privacy, but is this prohibition of transparency ethically justified?

Some even go a step further and ask you, an individual in a community, to choose privacy over transparency as well. Harvard—alongside Yale, Chicago, and Northwestern—asks you to “Please Respect Individuals’ Privacy. Anonymity for these individuals remains paramount. Please respect their privacy—even if you believe you know who they are—so they can focus completely on their health” (emphasis in original).

But do you have an ethical obligation to do so at the time of a pandemic?

In the face of this unprecedented pandemic, are there compelling ethical arguments for giving paramount importance to individual privacy?

Risk of Harm

One argument could be a utilitarian harm-benefit balance.

Harvard’s request for privacy implies that transparency might put unnecessary stress on the infected (“so they can focus completely on their health”). But what stress? The standard concerns about being “outed” are public attention, blame for catching the disease, and loss of support from one’s social circle—any of which might also push individuals to avoid getting tested in the first place.

None of these fit the global experience with COVID-19. Unlike most other infectious diseases, COVID-19 does not have any stigma attached. (I will set aside the racist comments directed at Asians and Italians, since they fall into the category of pure racism rather than stigmatization of infected individuals.)

Infected individuals are not considered reckless or part of a particular marginalized or disadvantaged group. In fact, beloved celebrities, politicians, and even Harvard’s own president have come forward as they and their families have fallen ill one after another, showing COVID-19’s non-discriminatory course.

In short, the risks from transparency aren’t serious.

By contrast, the risk of harm from privacy could be very serious.

As the virus continues to spread globally, the WHO and other experts recommend testing, isolation of confirmed cases, and contact tracing of suspected cases. Identifying suspected cases is crucial; even if testing is not available, suspected cases can self-isolate.

With privacy “paramount,” traditional contact tracing relies solely on the confirmed case’s memory of the past two weeks to identify suspected cases. Such memory can be spotty, especially when the individual is already experiencing tremendous stress and anxiety. If community members knew identifying details of confirmed cases, they could (self-)identify potential carriers and take extra precautions. An editorial in The Harvard Crimson is right to point out that transparency will not help with asymptomatic or otherwise untested carriers, but inferring from this deficit that transparency is useless is a non sequitur.

Autonomy

In any event, the consequentialist argument probably does not capture the spirit behind claims that privacy is paramount.

It is more likely that this claim—like many other privacy claims—is inspired by Kantian notions.

In rough outline, the argument would be that every individual is entitled to make decisions regarding their own life, including which personal information to share and with whom. Privacy is both an exercise of our autonomous decision-making and a tool to give us personal space to make other autonomous decisions.

But it comes at a cost—precisely a cost to autonomy: the autonomy of others. To the extent that private information is relevant to others’ decision-making, their autonomy is impeded by their lack of access to it. Therefore, one person’s privacy can be another person’s loss of autonomy.

In the case of the COVID-19 pandemic, privacy of an infected individual does indeed affect others’ autonomous decision-making. If you or a member of your family has been in contact with the infected person, this is relevant information. If you had it, you would use it to keep yourself, your family, and your community safe. If autonomy is paramount, then in this case, individual privacy might not be.

The Ethical Decision

These arguments against sacrificing transparency for the sake of privacy in the time of a pandemic apply to the ethical judgments of individual community members, institutions, and even infected individuals themselves.

Is it ethically justifiable for the infected individual to request privacy in this case? Using the reasoning above, I would argue that it is not.

Do you, as a member of a community, have an ethical obligation to respect their privacy and not disclose information? You don’t.

And is it ethically justifiable to prohibit transparency in the extreme circumstances of a deadly pandemic where there is significant risk of harm, where vulnerable populations are most at risk, and where individual autonomy is restricted by a privacy-induced lack of information? No, it is not.

One way to get around this tension between privacy and transparency could be through technology. By supplementing traditional contact tracing with digital contact tracing measures, the relevant information could be shared without revealing personal information (as in the case of Singapore’s tracing app).
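
To make this concrete, here is a minimal sketch of the rotating-token idea behind privacy-preserving proximity tracing. It is an illustrative assumption about how such apps work in general, not the actual protocol of Singapore’s app: phones broadcast short-lived random tokens, and matching happens locally, so no identities are exchanged. All names and parameters below are hypothetical.

```python
import secrets
import time

TOKEN_ROTATION_SECONDS = 15 * 60   # assumed rotation interval
RETENTION_DAYS = 14                # assumed exposure look-back window

class Device:
    """Toy model of a phone running a privacy-preserving tracing app."""

    def __init__(self):
        self.own_tokens = []    # (timestamp, token) pairs we have broadcast
        self.seen_tokens = []   # (timestamp, token) pairs observed nearby

    def current_token(self, now=None):
        """Broadcast a fresh random token; no identity is attached to it."""
        now = now or time.time()
        token = secrets.token_hex(16)
        self.own_tokens.append((now, token))
        return token

    def observe(self, token, now=None):
        """Record a token heard from a nearby device (e.g., over Bluetooth)."""
        self.seen_tokens.append((now or time.time(), token))

    def tokens_to_upload(self):
        """On a positive test, share only our recent random tokens."""
        cutoff = time.time() - RETENTION_DAYS * 86400
        return [t for ts, t in self.own_tokens if ts >= cutoff]

    def check_exposure(self, published_tokens):
        """Match published tokens against what we saw; runs on-device."""
        published = set(published_tokens)
        return any(t in published for _, t in self.seen_tokens)

# Usage: Alice and Bob cross paths; later Alice tests positive.
alice, bob = Device(), Device()
bob.observe(alice.current_token())   # Bob's phone hears Alice's token
exposed = bob.check_exposure(alice.tokens_to_upload())
print("Bob was near a confirmed case:", exposed)  # True, yet Bob never learns who
```

The design choice doing the work here is that matching happens on the receiving device: Bob learns only that he was near a confirmed case, and neither Bob nor any central party learns Alice’s identity from the tokens themselves.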

This is a promising option that we must explore to reduce harm and to protect individual autonomy. But we must be careful about the design of these tools and put in place safeguards around their use. If not, they could end up being an even bigger threat to our autonomy both immediately and in the long run.

Cansu Canca

Cansu Canca, Ph.D. is a philosopher and the founder and director of the AI Ethics Lab. She leads teams of computer scientists, philosophers, and legal scholars to provide ethics analysis and guidance to researchers and practitioners. She holds a Ph.D. in philosophy specializing in applied ethics. Her area of work is the ethics of technology and population-level bioethics. Prior to the AI Ethics Lab, she was a lecturer at the University of Hong Kong, and a researcher at Harvard Law School, Harvard School of Public Health, Harvard Medical School, Osaka University, and the World Health Organization. She tweets @ccansu.
