Facebook Should ‘First Do No Harm’ When Collecting Health Data

By Mason Marks

Following the Cambridge Analytica scandal, it was reported that Facebook planned to partner with medical organizations to obtain health records on thousands of users. The plans were put on hold when news of the scandal broke. But Facebook doesn’t need medical records to derive health data from its users. It can use artificial intelligence tools, such as machine learning, to infer sensitive medical information from its users’ behavior. I call this process mining for emergent medical data (EMD), and companies use it to sort consumers into health-related categories and serve them targeted advertisements. I will explain how mining for EMD is analogous to the medical diagnosis performed by physicians, and why companies that engage in this activity may be practicing medicine without a license.

Last week, Facebook CEO Mark Zuckerberg testified before Congress about his company’s data collection practices. Many lawmakers who questioned him understood that Facebook collects consumer data and uses it to drive targeted ads. However, few Members of Congress seemed to understand that the value of data often lies not in the information itself, but in the inferences that can be drawn from it. Numerous examples illustrate how health information can be inferred from the behavior of social media users: last year, Facebook announced that it relies on artificial intelligence to predict which users are at high risk for suicide; a leaked document revealed that Facebook identified teens feeling “anxious” and “hopeless”; and data scientists used Facebook messages and “likes” to predict whether users had substance use disorders. In 2016, researchers analyzed Instagram posts to predict whether users were depressed. In each of these examples, user data was analyzed to sort people into health-related categories.

Before I discuss how this practice is analogous to medical diagnosis, I will explain why we should care. Companies sort consumers into health-related categories primarily to deliver targeted advertising. If Facebook can label people as “diabetic,” then it can target them with ads for products that diabetics are likely to buy. This application of EMD seems benign, but there is cause for concern. People with illnesses and disabilities are often susceptible to exploitation, and it is easy to imagine scenarios in which they are harmed by targeted advertising. Targeted ads that prey on people’s unique vulnerabilities can exacerbate self-injurious behavior and harm society by generating negative externalities such as increased healthcare costs. For example, Facebook’s algorithms could identify anorexic teens and target them with ads for diet pills when they feel most vulnerable. During Mark Zuckerberg’s Senate hearing last week, Senator Christopher Coons asked him whether such targeting could occur. Coons also asked whether Facebook’s algorithms would show ads for casinos to people with gambling problems or liquor ads to alcoholics. Zuckerberg did not provide a clear answer to these questions.

The following day, during Zuckerberg’s appearance before the House Energy and Commerce Committee, Representatives David McKinley and Gus Bilirakis asked him about Facebook ads placed by illegal pharmacies selling opioids without requiring prescriptions. Algorithms may learn to display these ads to people with opioid use disorders, which could contribute to the opioid crisis.

Consumers can also be harmed if their health information is shared with data brokers or other third parties. For example, if Facebook labels users as alcoholics, and employers or insurers obtain this information, then the users could face discrimination when they apply for jobs or insurance. During his testimony, Zuckerberg asserted that Facebook does not sell user data. However, it remains unclear whether Facebook sells information that it infers from user data (such as EMD).

When doctors diagnose patients, they gather information about lifestyle, family history, symptoms, and medications. They combine this information with test results and feed it into algorithms they learned during their training. In medical school and residency, doctors memorize hundreds of diagnostic algorithms. Does the patient have a cough? If yes, branch right in the decision tree. If not, branch left. Medical diagnosis essentially boils down to navigating a large set of branching decision trees. When companies mine for EMD, the process is similar. They collect data from consumers and feed it into machine learning algorithms that have been trained to identify medical conditions. The result is a diagnosis, which is really nothing more than an estimate of a person’s health based on probabilities.
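To make the analogy concrete, here is a minimal sketch of how such a classifier could work. It is not Facebook’s actual system; the behavioral features, training data, labels, and the choice of a scikit-learn decision tree are all assumptions made purely for illustration.

```python
# A minimal sketch of EMD-style classification, NOT any platform's real system.
# All features, data, and labels below are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical behavioral signals per user:
# [posts_per_day, late_night_sessions_per_week, negative_word_ratio]
behavior = [
    [2, 1, 0.05],
    [15, 9, 0.40],
    [4, 2, 0.10],
    [12, 8, 0.35],
    [1, 0, 0.02],
    [10, 7, 0.30],
]
# Hypothetical health-related label the model learns to infer (1 = "at risk", 0 = "not at risk")
label = [0, 1, 0, 1, 0, 1]

# Fit a shallow decision tree: each split is a yes/no question, like a diagnostic algorithm
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(behavior, label)

# Show the learned branching questions ("if feature <= threshold, branch left; else branch right")
print(export_text(model, feature_names=[
    "posts_per_day", "late_night_sessions", "negative_word_ratio"]))

# "Diagnose" a new user from behavior alone: a probabilistic estimate, not a medical exam
new_user = [[11, 6, 0.33]]
print("inferred category:", model.predict(new_user)[0])
print("estimated probability:", model.predict_proba(new_user)[0])
```

The printed tree is just a stack of yes/no questions over behavioral signals, and the output is a probability-weighted label, which is why the process resembles the branching, probabilistic reasoning of diagnosis.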

All U.S. states have laws that prohibit unlicensed individuals from practicing medicine. In California, the unlicensed practice of medicine consists of unlicensed diagnosis or treatment, and violations are punishable by fines of up to $10,000 and imprisonment for up to one year. The state Business and Professions Code defines “diagnosis” as “any undertaking by any method, device, or procedure whatsoever, and whether gratuitous or not, to ascertain or establish whether a person is suffering from any physical or mental disorder” [emphasis added]. This description sounds a lot like the act of collecting data and sorting consumers into health-related categories.

Even if Facebook is not violating state laws when sorting people into health-related categories, at the very least, it is acting like a medical diagnostician, and it should be treated like one. Professor Jack Balkin argues that online platforms, including Facebook, should be considered digital information fiduciaries. Classic examples of fiduciaries are professionals such as doctors and lawyers. People seek counsel from these professionals because they possess specialized knowledge and abilities that their patients and clients lack, and these asymmetries create opportunities for abuse. As a result, society imposes legal duties on physicians: they have a duty of care requiring them to act reasonably to avoid harming patients, a duty of confidentiality to protect patients’ information, and a duty of loyalty requiring them to act in their patients’ best interests. According to Balkin, the relationships between consumers and online platforms like Facebook are characterized by similar asymmetries of knowledge and ability, and society should impose fiduciary duties on the platforms.

Balkin points out that the duties of digital information fiduciaries like Google and Facebook are less expansive than those of classic fiduciaries. We can imagine a continuum of information fiduciaries with doctors and lawyers on the extreme end of the spectrum (having strong fiduciary duties), companies that collect very little consumer data on the other end (having few if any fiduciary duties), and social media companies somewhere in between.

I argue that when social media companies mine health data to diagnose consumers, their duties shift to the extreme end of the spectrum, and they should be treated as classic fiduciaries. Like physicians, they should not use data to diagnose people unless they have the requisite medical training and licenses. Doing so would violate their duty of care. If they collect or infer health information, they should maintain its confidentiality. Failing to do so would violate their duty of confidentiality. Most importantly, social media companies should not use health data to exploit their users for financial gain. Doing so would violate their duty of loyalty.

As Congress contemplates how to regulate social media companies, it should acknowledge the subtle difference between raw user data and the information that can be inferred from it. It should recognize that social media companies mine EMD, and when they do so, they act like medical diagnosticians and should be treated as classic fiduciaries. It is often said that the primary responsibility of doctors is to ‘first do no harm.’ When social media companies choose to traffic in sensitive health data, they should be held to similar standards.

Mason Marks

Dr. Mason Marks is a Senior Fellow and Project Lead on the Project on Psychedelics Law and Regulation at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. He is an Assistant Professor of Law at the University of New Hampshire Franklin Pierce School of Law and an affiliated fellow at the Information Society Project at Yale Law School. View his full bio at masonmarks.com.
