
Why You Must Stop Using the Word “Privacy” Now

In a recent New York Times op-ed, essayist Charlie Warzel suggests that the problem of privacy in modern life is that it’s too complex.

His diagnosis? “Privacy Is Too Big to Understand.” His piece contains important points, but he’s wrong. While it is true that the many ways our data is shared boggle the mind, the issue is not that privacy is “complex.”

The problem is the term itself. I believe we should stop using the term “privacy.”

Before delving further, it is crucial to explain why this matters. It is quite clear that privacy is under threat. Privacy breaches are everywhere: Amazon’s Alexa recently recorded a couple’s conversation and sent a recording of it to one of their contacts. Yahoo eventually disclosed that a 2013 breach had exposed private data for nearly 3 billion accounts, including passwords, birthdates, and other information. Yet privacy breaches around health are even more consequential: a 2017 data breach at a New York hospital affected roughly 7,000 patients.

Beyond breaches lies the problem of data. Tech companies like Google and Facebook, and even startups, all make money using data, including health data. This world of selling, transferring, and bundling digital health data turns the issue into a full-blown privacy nightmare. One woman using the app Ovia to track her pregnancy found that her employer was tracking this data, paying health data firms for access to it. Facebook collects reams of health data, including information about menstruation. Given the growth of digital platforms for health and the ubiquity of non-health data used for health purposes, the explosion of data created, collected, and shared about your health poses a particular problem for privacy.

Yet the real problem for privacy lies not merely in the ubiquity of data sharing, but in how data is used in future scenarios that cannot be predicted. For example, we assume that Target protects our banking information when we buy clothes. But no one predicts that Target’s payment systems will one day be hacked through a third-party vendor. You signed and understood the privacy policy when you signed up with company X, not knowing that it would one day be acquired by company Y, which promptly discards company X’s policies. Breaches of privacy, especially, occur precisely when one data set is combined with some future data set that no one could have predicted would exist or be usable. It is this very process that allows information that is de-identified today, and that you consider private, to become entirely identifiable in the future. Poof. Privacy, gone.

What this shows is that privacy is, in many ways, an always-ticking time bomb. In short, your current data, which may actually be somewhat “private” at present, is likely to become de-privatized because of scenarios that cannot be predicted. Between breaches, leaks, data selling and buying, company acquisitions, and the ever-expanding sets of data that are born every day and then become newly combinable with other data, it is impossible to guarantee privacy as traditionally understood. Unfortunately, most discussion and strategizing about privacy, which often focuses on single companies and platforms, misses this reality. While no one can predict the future, we can predict that your privacy will be undone at some point, in some way.

So, what are we to do?

For many, the solution always lies with “public education.” Scholars, tech companies, and consumer advocates alike exclaim that people need to be made aware. Educate consumers! they say. Education, along with proposals for laws, apps, and more, dominates our thinking. Yet what if one of the biggest solutions lies right under our noses?

If it is true that our privacy can no longer be guaranteed (and this is indeed increasingly the case), what if we stopped using the word privacy altogether? What if, instead, we replaced it with the term “temporary-privacy,” as a way to encode in the name itself the impermanence of all data? Would it impact our decisions? Our understanding? Our expectations?

To be clear, changing a mere word will not magically force the public into appropriate understanding and vigilance. We still need regulations, “privacy” policies, increased oversight, and help from tech companies. However, shifting our language would also solve a key problem in the debate over privacy: our tacit expectation that privacy actually exists. That error is partly the fault of the word itself. Research shows that people remain “surprised” at how frequently apps share their information. This is a problem. We should not be surprised that our private data is now everywhere. In a world where all things are increasingly knowable and everyone is identifiable, it is simply incorrect (and perhaps even unethical) to talk about privacy in the same way. For patients and providers, tech firms and senators alike, switching to an understanding of data privacy as inherently short-lived is a must.

We should switch to the language of “temporary-privacy” because the temporariness, indeed the impossibility, of data privacy is, well, true. Yet there are other ways to enlist language here. What if, instead of the term “temporary-privacy,” we included a time estimate next to every statement regarding your data? What if you signed a “privacy form” that displayed, at the top, an estimated two years of privacy for your data, based on projections of breaches, hacks, and the like?

Some may counter that we should not foist responsibility here onto the public. For them, this “buyer beware” mindset (one which, incidentally, dovetails with the libertarian underpinnings of tech thinking) merely reinforces a culture in which tech companies build willy-nilly and the risks fall to society. Moreover, getting the public to accept the loss of privacy as inevitable may invite companies to engage in even more exploitation. Lastly, increased public acceptance may undercut privacy lawsuits against companies. These objections are significant. Yet while we address them, we should also hack away at the mind-trick embedded in the term “privacy.”

On the horizon is an even more significant ethical issue. As digital services and systems are increasingly necessary for modern daily life, we may soon encounter a world where handing over sensitive data is no longer a choice. If accessing vital services such as health care or basic transportation requires using digital tools, and these tools require you to “agree” to have your data exploited, then we will soon see a society in which we trade privacy in order to live.

Yet might a change in language help here too? If we gradually work toward a cultural shift in which all privacy is understood to be temporary, flimsy, and partial, with language helping to push that shift along, it may not only counter our obliviousness about privacy but also reframe public dialogue in ways that could inspire society-level change.

Language matters. In a world where ghosts of privacy past distract us from the ghost of privacy present, we must radically transform our understanding, expectations, and obligations regarding “privacy.” By forcing radical clarity around our increasingly flimsy temporary-privacy, we can bring about a new future of privacy for the public.

 

Mark Robinson is a 2018-2019 Petrie-Flom Center Student Fellow. 

Mark Dennis Robinson

Mark Robinson earned his Master’s in Bioethics at Harvard Medical School in 2019, with a project that explored the intersection of technology and ethics. A graduate of the University of Chicago, he also holds a PhD from Princeton University, where he held the Presidential Fellowship. In Summer 2019, Mark will join Georgia Institute of Technology as a visiting scholar. Mark is also the author of a forthcoming book, "The Market in Mind: How Financialization Is Shaping Neuroscience, Translational Medicine, and Innovation in Biotechnology," about the ethical and scientific impacts of the increasing financialization of neuroscience (and of translational science and medicine in general), to be published by MIT Press in 2019. Mark's fellowship project, "Ethics for a Frail Subject: Systems, Technology, and a Theory of Global Moral Impairment," considered how bioethics might be designed around an understanding of human beings as "impaired subjects" that accounts for biological impediments to human morality.
