
Nobody Reads the Terms and Conditions: A Digital Advanced Directive Might Be Our Solution

Could Facebook know your menstrual cycle?

In a recent op-ed, “You Just Clicked Yes. But Do You Know the Terms and Conditions of That Health App?,” I argued that a mix of factors has made the regulation of web-based health services and apps urgently necessary: most of these applications do not fall under the Health Insurance Portability and Accountability Act (HIPAA), few people actually read the Terms and Conditions, and the sector is growing explosively. The need for solutions is dire.

The dangers are only increasing.

For example, Apple is facing a lawsuit over privacy leaks in its apps. New York’s attorney general sued several “wellness” tech companies, alleging questionable privacy-related practices. A recent study revealed that health apps share your data widely with third parties. Study after study shows that your data is being leaked, repackaged, and sold to companies like Facebook and others. The costs of clicking “yes” on Terms and Conditions—terms that you are unlikely to have fully read—are creating privacy nightmares. These problems are even more consequential when it comes to personal health data. Tech companies collect and share deeply personal health data ranging from body weight to menstrual cycles. 23andMe promises to use your genetic information to report on your propensity for certain diseases. Even when your health data is supposedly de-identified, it can still be linked back to you. And because selling this data is often a primary product of these companies, the wanton spread of your health data across the internet is proliferating right alongside the sector’s rapid growth.

And herein lies the real danger. What many consumers fail to realize is that the “free” apps they use for their New Year’s weight-loss resolution, or to book nearby yoga appointments, are not actually free. You “pay” for these apps by providing your data, to which you have “assented” by clicking agree in their Terms and Conditions.

Could an app that “knows” our data preferences warn us when we use services whose terms and conditions violate them?

I propose an independent data privacy application that allows users to select, in one place, a broad range of preferences around the sharing of their data and information across all apps and web-based services. A so-called “running application,” operating in the background of a phone or computer, the tool could serve as a kind of digital “Advanced Directive” for one’s data, one that could then be applied in various digital contexts. With preferences loaded, the application could warn users when the terms of a service they use violate their own privacy selections. The tool would in no way replace the need for users to read the T&C; rather, it could offer powerful support toward that larger aim.
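
To make the idea concrete, here is a minimal sketch, in Python, of how such a background check might work. It assumes that apps would one day publish machine-readable summaries of their terms; no such standard exists today, and every name below (DataPractice, DigitalAdvanceDirective, violations) is illustrative rather than an existing API.

```python
# Hypothetical sketch of a Digital Advanced Directive (DAD) checker.
# Assumes apps publish machine-readable summaries of their Terms and
# Conditions; no such standard exists today, and all names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class DataPractice:
    """One data-sharing practice declared in an app's terms."""
    category: str     # e.g., "menstrual_cycle", "body_weight", "location"
    shared_with: str  # e.g., "advertisers", "researchers", "none"


@dataclass
class DigitalAdvanceDirective:
    """The user's standing privacy preferences."""
    blocked_categories: set[str]  # data the user refuses to share at all
    blocked_recipients: set[str]  # recipients the user refuses entirely

    def violations(self, practices: list[DataPractice]) -> list[str]:
        """Return a warning for each declared practice that conflicts
        with the user's stored preferences."""
        warnings = []
        for p in practices:
            if p.category in self.blocked_categories:
                warnings.append(f"This app shares '{p.category}' data, which you have blocked.")
            if p.shared_with in self.blocked_recipients:
                warnings.append(f"This app shares data with '{p.shared_with}', which you have blocked.")
        return warnings


# Example: a wellness app's declared practices checked against a directive.
dad = DigitalAdvanceDirective(
    blocked_categories={"menstrual_cycle"},
    blocked_recipients={"advertisers"},
)
app_terms = [
    DataPractice("menstrual_cycle", "advertisers"),
    DataPractice("step_count", "researchers"),
]
for warning in dad.violations(app_terms):
    print("WARNING:", warning)
```

The hard problem, of course, sits upstream of this check: translating dense legal T&C language into structured declarations that a tool like this could actually read.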

A Digital Advanced Directive (DAD) could be set up in myriad ways.

For example, such a directive could be managed and stored in a standalone application, which could prompt users to update their preferences periodically. It could be part of the initial preference selection built into a mobile phone’s operating system. Its use could also be part of the regulatory solution I proposed in the first op-ed, with T&Cs prompting users both to agree to the terms and to use a publicly available app that helps them become aware of how their data is being used. Significant research and work would be needed to figure out how to compel the public to make use of the tool, and decisions would need to be made about who would develop, control, and fund it.

Could such an app help us, albeit in a limited way, with the mazes of T&C to which we mindlessly assent? And would such a solution actually change our privacy-related practices? Apparently so: once informed about the reality of their apps’ privacy practices, people may behave differently. In a 2015 study, researchers examined whether “nudges,” in the form of attention-grabbing pop-up notifications about how their apps were sharing data, could help users recognize their privacy risks. According to the researchers, users were largely unaware of the extent to which their private data was being shared; once notified, they often changed their privacy settings.

So, if it is true that once we realize just how much of our data is shared we may make different decisions, then it becomes even clearer that T&Cs are a key cog in the modern mess that is health-data privacy. And, as the New York Times article that exposed this issue indicated, tech companies are fully aware that users do not read such terms. They exploit this “fiction” of consent to sell data, as their business models require.

Importantly, what happens if the preferences in your DAD run up against the terms of your favorite application? At present, a refusal to “agree” means an inability to use the app in question. That is a discrete ethical issue on its own. But an even more significant dilemma emerges as more and more basic forms of healthcare access are managed through these web-based tools. Would refusing to allow a company to share your health data with Facebook mean that you could not access your web-based health care provider? As applications increasingly provide direct medical services, ranging from psychiatric consultations to sexual health consultations, this issue raises new questions.

Yet the DAD also represents something else. Rather than continuing our current mode of hastily responding to the social and ethical challenges technology creates, we should also invent technology that proactively advances our ethical aims and desires. Given the explosive growth of telemedicine and digital health systems, this moment in the history of medicine is replete with opportunities to build new tools that support important social aims amidst a radically changing healthcare context.

Indeed, our current digital moment is rife with risks. Yet a broader view shows that it also represents a powerful opportunity. Through robust and flexible regulatory oversight, and through digital tools that help users express their preferences, the digital shift in health could become a little less treacherous.


Mark Robinson is a 2018-2019 Petrie-Flom Center Student Fellow. 

Mark Dennis Robinson

Mark Robinson earned his Master of Bioethics degree at Harvard Medical School in 2019, with a project that explored the intersection of technology and ethics. A graduate of the University of Chicago, he also holds a PhD from Princeton University, where he held the Presidential Fellowship. In Summer 2019, Mark will join the Georgia Institute of Technology as a visiting scholar. He is also the author of a forthcoming book, “The Market in Mind: How Financialization Is Shaping Neuroscience, Translational Medicine, and Innovation in Biotechnology” (MIT Press, 2019), about the ethical and scientific impacts of the increasing financialization of neuroscience, and of translational science and medicine in general. Mark’s fellowship project, “Ethics for a Frail Subject: Systems, Technology, and a Theory of Global Moral Impairment,” considered how bioethics might be designed around an understanding of human beings as “impaired subjects,” accounting for biological impediments to human morality.
