In a piece in The Guardian and a forthcoming article in the Yale Journal of Law and Technology, Bill of Health contributor Mason Marks recently argued that Facebook’s suicide prediction algorithm is dangerous and ought to be subject to rigorous regulation and transparency requirements. Some of his suggestions (in particular his calls for more data, and his proposals that are really about how we treat potentially suicidal people rather than how we identify them) are powerful and unobjectionable.
But Marks’s core argument—that Facebook’s suicide prediction algorithm is morally problematic unless it is subject to the regulatory regime of medicine and operated on an opt-in basis—is misguided and alarmist.
With its suicide prediction algorithm, Facebook is doing something magnificent. It has the real potential to mitigate one of the most intractable sources of human suffering in the world. Of course it’s not perfect. I would like to see real efficacy data based on actual suicide outcomes, as much as Marks surely does. I’m sure Facebook would, too. They obviously want the program to work. But while waiting for data, we should celebrate, not denigrate, Facebook’s efforts — and there is no reason to subject them to bureaucratic and extremely restrictive healthcare regulations, nor to insist that the program be opt-in.
Much of Marks’s argument arises from a conflation of medical treatment for potentially suicidal individuals with the proactive identification of people in imminent danger of self-harm. The former is medicine, but the latter is essentially a law enforcement function that inevitably and rightly involves the police. Suicide prevention is not only a question of the long-term management of mental health. Many people who commit or attempt suicide never present themselves at therapists’ offices looking for treatment, and people often affirmatively conceal their suicidal thoughts from healthcare providers. Some people seek mental health treatment; many don’t. And even those who seek treatment may decide to harm themselves in a moment of despair without warning their doctor. What this means is that suicide prevention is often a matter of physically preventing people from killing themselves: talking people down from bridges, getting people who have attempted hanging to the hospital, and tackling people out of the way of oncoming trains.
These are emergency situations, not medical diagnoses, and the challenge is not only about making sure people are candid with their doctors (a justification for medical privacy), but about finding and affirmatively reaching the people out in the world who are in imminent danger of suicide. The police are involved not to harm or incarcerate or oppress mentally ill people, but because someone has to respond, and they are the public employees best situated to handle high-stakes, life-or-death, highly physical, and potentially violent emergencies.
Indeed, police departments are increasingly seeing suicide intervention as part of their primary mandate. And this is what Facebook’s algorithm aspires to do: not diagnose people or provide treatment, but identify people in crisis situations who need law enforcement (and then, when the dust has settled, medical) intervention.
It’s worth considering how this problem is currently dealt with—people are trying really hard, but it’s a crapshoot. Friends, family, neighbors, roommates, or random bystanders call a suicide hotline or police dispatch when they’re worried about someone, and the police conduct a welfare check. When I worked for the Cornell Police, this was a huge percentage of what the department did from night to night. I am not aware of any instances of dangerous escalation, and I am confident that there are people who are alive today who otherwise would not be. It is an entirely ad hoc system, but everyone involved is doing their best.
But of course, a citizen reporting system like this is going to involve many false positives and false negatives, each of which carries all the potentially tragic consequences Marks points out with respect to Facebook-induced welfare checks. Friends and family simply might not notice that someone is suicidal, or the person may alienate the people who care most about them. Or, even if people are worried, they might not report it because they don’t want to get involved, or don’t think it’s that big a deal, or for any number of other reasons. On the other hand, most welfare checks probably do not involve imminent risk of suicide; sometimes a college-aged child just doesn’t want to talk to their parents or forgets to call.
This, really, is what Facebook’s algorithm can help with. It can identify people in crisis more reliably than semi-random peer reports can. As Marks acknowledges, it can do this because it has access to valuable behavioral data that family and friends often don’t. I have no idea how effective the algorithm is at the moment. I am sure Marks is right that there will be false positives and false negatives. But there already are. The algorithm doesn’t have to be all that much better than the status quo to be an improvement.
Viewed in this light, it is clear why subjecting Facebook to HIPAA-type regulation would be a mistake. The algorithm is supplementing the efforts of ordinary bystanders, not doctors. Family and friends are not subject to HIPAA when they call the police, and Facebook shouldn’t be for doing the same thing better.
Finally, Marks argues that Facebook should require people to affirmatively opt in to the monitoring of their accounts for suicide risk, on the grounds that this would better respect autonomy. This completely misses the point. While the empirics could go either way, as a matter of logic it seems probable that the people who opt out are much more likely to be suicidal than the people who opt in.
But there’s a deeper issue here, about the right to autonomy Marks is trying to safeguard. Is there a right to kill yourself without interference? By opting out, does one reserve the right to commit suicide without anyone trying to talk them down?
That’s not a right you have. Your life matters to other people, and they care about you. The debate about euthanasia aside, no one thinks it’s a violation of privacy or autonomy to call the police when someone might hurt themselves. It’s an act of heroism. It’s a civic duty. It’s an inescapable consequence of living in a society that cares, whoever you are, about whether you live or die.
James Toomey is a 2018-2019 Petrie-Flom Center Student Fellow.