Regulating Out of the Social Media Health Crisis

By Bailey Kennedy

If something changes the pathways in our brains and damages our health — and if it does so to Americans on a vast scale — it should be regulated as a threat to public health.

It’s time for our regulators to acknowledge that social media fits this description.

Social media poses an active health threat to many of its users, much as other regulated substances do: it has been linked to a variety of harmful health outcomes, including depression. It has also become increasingly clear that social media can be addictive.

Even if it is a behavioral rather than a substance addiction, with only indirect links to physical health, the high number of Americans who exhibit some degree of social media addiction is concerning.

Inasmuch as social media presents us with a public health crisis, the American government should consider potential regulatory steps to address it.

Reviewing Health Announcements at Google, Facebook, and Apple

By Adriana Krasniansky

Over the past several days, technology players Google, Apple, and Facebook have each reported health-related business news. In this blog post, we examine their announcements and identify emerging ethical questions in the digital health space.

On Nov. 1, Google announced plans to acquire smartwatch maker Fitbit for $2.1 billion in 2020, subject to regulatory approval. The purchase is expected to jumpstart the production of Google’s own health wearables; the company has already invested at least $40 million in wearable research, absorbing watchmaker Fossil’s R&D technology in January 2019.

On Social Suicide Prevention, Don’t Let the Perfect be the Enemy of the Good

In a piece in The Guardian and a forthcoming article in the Yale Journal of Law and Technology, Bill of Health contributor Mason Marks recently argued that Facebook's suicide prediction algorithm is dangerous and ought to be subject to rigorous regulation and transparency requirements. Some of his suggestions (in particular, his calls for more data, and those that concern how we treat potentially suicidal people rather than how we identify them) are powerful and unobjectionable.

But Marks's core argument—that Facebook's suicide prediction algorithm is morally problematic unless it is subject to the regulatory regime of medicine and operated on an opt-in basis—is misguided and alarmist.