
Do You Know the Terms and Conditions of Your Health Apps? HIPAA, Privacy and the Growth of Digital Health

As more health care is delivered virtually through apps and web-based services, we need to take a closer look at whether users fully understand what they are consenting to with respect to their health information.

How health apps obtain consent needs to be re-evaluated. At the same time, digital health offers an important opportunity to strengthen privacy practices across digital platforms, and we ought to seize it.

The growth of telemedicine and digital health applications is explosive. These apps, like many others, simply ask users to click “agree” to a consent form. Yet there is an elephant inside your Terms and Conditions: almost no one actually reads them. Given this fact, and the increasingly digital context of health care provision, there is much to worry about.

It is commonly understood that no one reads Terms and Conditions in full; you would need to quit your job to find the time. According to a recent New York Times article, “The average person would have to spend 76 working days reading all of the digital privacy policies they agree to in the span of a year.” Calling the consent process a “legal fiction,” the article outlines the problems inherent in disclosure processes. From Amazon and Spotify to brick-and-mortar companies now digitizing their processes and products, the implications for consumers are wide-ranging.

Yet, while unread terms are a problem for everyday apps, health-related apps are of special concern.

Health apps track users’ health information. Apps such as MyFitnessPal are designed to track workouts, while a plethora of new applications let consumers order specific prescriptions. Apps such as Pocket Doctor suggest possible diagnoses based on information patients enter. Even everyday technologies such as the Apple Watch collect information that counts as health data. Newer applications offer more concretely medical services: Talkspace allows users to communicate with a professional via video, voice, or text messaging, and Roman, a snazzy new tech company, says it enables men to get sexual health medications “quickly and discreetly.” An app from ADA Health even allows patients to disclose suicidal thoughts.

The market for health-related applications has been nothing short of explosive. The sector, now known as “mHealth,” reached $28 billion in 2018 and is projected to grow to a market value of over $100 billion by 2023.

The problems of privacy are even more consequential for health-related apps, and they are exacerbated by the sector’s rapid growth. Health applications collect and share data ranging from biometric measurements, such as the heart rate that fitness applications track, to genetic predispositions to certain diseases. With genetic information, privacy concerns extend beyond the user herself to family members who share her DNA. Imagine an insurance company learning that a person carries an unusual risk for a disease, based on genetic data purchased from a genealogy company. In the case of genetics, regulations such as the Genetic Information Nondiscrimination Act (GINA) prohibit this particular use, but other risks remain. In short, the sensitive nature of the data may render traditional digital consent processes inadequate.

In the U.S., health information and privacy are governed by several regulations, most notably the Health Insurance Portability and Accountability Act (HIPAA). Yet HIPAA may not help. The first problem is that HIPAA applies only to “covered entities,” such as health care providers, health plans, and their business associates, and so does not cover most of the technology companies where your data is increasingly housed.

The second problem is that health-related data is often sold to other companies, where it is combined with other kinds of data. It is at this point that the data becomes both more powerful and more risky.

Advertisers often use such integrated data, assembled from various sources, to serve customized advertisements. Law enforcement recently used data from consumer genetics companies to aid a criminal prosecution. It is as data travels from one context and use to another that ethical issues compound and risks multiply. Finally, because a company buying this data may not be a health company at all, it is arguably not subject to the policies that govern traditional health care providers.

Given the limits of HIPAA and the growth of virtual medicine and health apps, what is an optimal solution?

I argue that a new regulatory structure is needed to manage the health data increasingly collected by companies. This regulation must also address three realities: data is often purchased by non-health actors; companies are often acquired by other companies subject to different regulations, or to none at all (a special issue for foreign acquisitions of U.S. companies); and many risks related to health data arise outside any single platform, which makes platform-specific policies inherently limited.

Additionally, new laws must contend with the inadequacy of terms and conditions as a means of establishing consent. Laws should not conflate the terms of service common to consumer applications with the consent required in health care settings. Instead, there should be consent processes specific to mHealth innovations. At a minimum, such apps should provide a separate, highly readable disclosure form that outlines, in accessible and clear language, the specific health-related data that will be collected.
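To make this concrete, here is a minimal, purely illustrative sketch of what such a disclosure might look like in machine-readable form, so that it could be rendered consistently across apps and reviewed by regulators. Every field name and category below is my own assumption; none is drawn from any existing standard or regulation.

```python
from dataclasses import dataclass

# Hypothetical sketch only: no existing law or standard defines these fields.
@dataclass
class HealthDataDisclosure:
    data_collected: list[str]        # specific health data types, in plain language
    shared_with: list[str]           # categories of third parties that receive the data
    sold_for_advertising: bool       # a direct yes/no, not buried in legalese
    retention_after_deletion: str    # how long data persists after the account is deleted
    hipaa_covered: bool              # whether the service is a HIPAA covered entity

    def summary(self) -> str:
        """Render the disclosure as a few short, readable sentences."""
        return (
            f"We collect: {', '.join(self.data_collected)}. "
            f"We share it with: {', '.join(self.shared_with)}. "
            f"Sold for advertising: {'yes' if self.sold_for_advertising else 'no'}. "
            f"Kept after you delete your account: {self.retention_after_deletion}. "
            f"Covered by HIPAA: {'yes' if self.hipaa_covered else 'no'}."
        )

# Example: what a fitness-tracking app might be required to surface up front.
disclosure = HealthDataDisclosure(
    data_collected=["heart rate", "sleep duration", "workout history"],
    shared_with=["analytics vendor", "advertising partners"],
    sold_for_advertising=True,
    retention_after_deletion="up to 24 months",
    hipaa_covered=False,
)
print(disclosure.summary())
```

The point is not these particular fields, but that the disclosure is short, separate from the general terms, and answers the questions users actually have.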

The need for regulating consent in digital environments also reflects the special importance of consent in medicine. The long history of consent in medicine points to clear ethical obligations to give patients some say in how their data is used. Data about a person’s sexual or mental health (think of Roman and Talkspace) is especially sensitive, and its potential misuse could be catastrophic. In other words, health data needs to be treated as a special class of data.

Thus, there is a need for a new regulatory structure specific to digital health and mHealth.

 

Mark Robinson is a 2018-2019 Petrie-Flom Center Student Fellow. 

Mark Dennis Robinson

Mark Robinson earned his Master’s in Bioethics at Harvard Medical School in 2019, with a project that explored the intersection of technology and ethics. A graduate of the University of Chicago, he also holds a PhD from Princeton University, where he held the Presidential Fellowship. In Summer 2019, Mark will join the Georgia Institute of Technology as a visiting scholar. Mark is also the author of a forthcoming book, "The Market in Mind: How Financialization Is Shaping Neuroscience, Translational Medicine, and Innovation in Biotechnology" (MIT Press, 2019), about the ethical and scientific impacts of the increasing financialization of neuroscience, and of translational science and medicine in general. Mark's fellowship project, "Ethics for a Frail Subject: Systems, Technology, and a Theory of Global Moral Impairment," considered how bioethics might be designed around an understanding of human beings as "impaired subjects," accounting for biological impediments to human morality.
