Negligent Failure to Prevent Suicide in the Age of Facebook Live

By Shailin Thomas

In 2016, Facebook unveiled a new tool that allows users to post live streams of video directly from their phones to the social media platform. This feature — known as “Facebook Live” — lets friends and followers watch a user’s videos as she films them. Originally conceptualized as a means of sharing experiences like concerts or vacations in real time, the feature was quickly adopted for uses Facebook likely didn’t see coming. In 2016, Diamond “Lavish” Reynolds used Facebook Live to document the killing of her boyfriend, Philando Castile, by a Minnesota police officer, sparking a national debate surrounding police brutality and racial disparities in law enforcement. Recently, another use for Facebook Live has arisen — one that Facebook neither foresaw nor wants: people have been using Facebook Live to broadcast their suicides.

This tragic adaptation of the Facebook Live feature has put Facebook in a tough spot. It wants to prevent the suicides its platform is being used to document — and just a few weeks ago it rolled out real-time tools that viewers of Live videos can use to identify and reach out to users who appear to be at risk while they’re filming — but it’s often too late by the time the video feed is live. Accordingly, Facebook is focusing its efforts on identifying those at risk of suicide before the situation becomes an emergency. It currently has teams designing artificial intelligence algorithms to identify users who may be at risk of suicide. These tools would scan Facebook users’ content, flagging individuals whose posts show warning signs of self-harm or suicide.
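To make the idea of such a flagging pipeline concrete, here is a minimal, purely hypothetical sketch in Python. Facebook has not published how its tools work; the `RiskModel` interface, the threshold, and the data structures below are invented for illustration only.

```python
# Hypothetical sketch of a content-flagging pass. Nothing here reflects
# Facebook's actual system; the model interface, threshold, and data
# shapes are invented for illustration.
from dataclasses import dataclass
from typing import Iterable, Protocol, Set


@dataclass
class Post:
    user_id: str
    text: str


class RiskModel(Protocol):
    def score(self, text: str) -> float:
        """Return an estimated probability that the text signals self-harm."""
        ...


def flag_at_risk_users(posts: Iterable[Post], model: RiskModel,
                       threshold: float = 0.9) -> Set[str]:
    """Collect user IDs whose posts score above the risk threshold."""
    flagged: Set[str] = set()
    for post in posts:
        if model.score(post.text) >= threshold:
            flagged.add(post.user_id)
    return flagged

# In any real deployment, flagged users would presumably be routed to human
# reviewers and support resources rather than acted on automatically.
```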

While the technology is still in development, it’s possible — even likely — that the algorithm will surpass human physicians’ diagnostic abilities. A machine-learning algorithm could theoretically analyze thousands of variables over millions of data points to pinpoint with unprecedented specificity the exact pattern of Facebook activity most likely to result in suicide. The ability to accurately identify those who are on the path to self-harm well before any actions are taken would be incredibly valuable. It could enable Facebook to reach out with resources and support early enough to impact a user’s trajectory. Friends or family could be notified in sufficiently severe cases, and authorities could be alerted in the direst circumstances. But, as the adage goes, “with great power comes great responsibility.” Facebook taking on this new quasi-diagnostic initiative — despite its potential as a force for good — raises legal questions about what exactly Facebook owes to those its algorithm identifies or overlooks.
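To give a rough sense of what “thousands of variables over millions of data points” could look like in practice, here is a toy sketch using scikit-learn. The feature matrix and labels are randomly generated stand-ins and the model choice is arbitrary; this is not a description of any system Facebook has built.

```python
# Toy illustration of training a risk classifier on per-user activity
# features (posting frequency, time-of-day patterns, language signals, etc.).
# The data are random stand-ins; no claim is made about Facebook's methods.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 50))        # 10,000 users x 50 activity features
y = rng.integers(0, 2, size=10_000)      # stand-in labels (1 = later self-harm)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk_scores = model.predict_proba(X)[:, 1]   # estimated risk score per user
print(risk_scores[:5])
```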

Once Facebook suspects that a user is at risk of suicide, what duties does it owe that individual? Could Facebook be found negligent if it doesn’t do enough to prevent self-harm? While states have allowed negligence claims for failure to prevent suicide, those claims are traditionally limited to psychiatric professionals.[1] Liability is premised on the existence of a relationship with the individual, the specialized expertise and experience of the professional, and the foreseeability of the self-harm. Cf. Bogust v. Iverson, 102 N.W.2d 228, 230 (Wis. Sup. Ct. 1960). Facebook’s suicide identification algorithm represents an interesting case because the social network could, in theory, meet the requisite criteria. The relationship Facebook has with its users is something never before encountered by the law, and in some ways it is more intimate than currently recognized special relationships like those between psychiatrists and patients. Facebook knows more about many of its users than even their closest friends and family members. Privy to many of its users’ public and private communications, it almost certainly has more intimate knowledge about its users than many psychiatric professionals have about their patients.

Relationships with Facebook are not currently characterized as caregiver relationships, as many recognized special relationships are. See, e.g., Dezort v. Hinsdale, 342 N.E.2d 468, 472-73 (Ill. App. Ct. 1976). But as online platforms take on more important roles in our lives, this could very well change. Jonathan Zittrain has characterized online platforms as falling on a spectrum between tool and friend. To make this distinction concrete, it’s useful to consider the evolution of Google. Google started as a tool — a relatively simple means of cataloging the contents of the distributed Web and making it available to users as a searchable index. As Google has innovated on its user experience, though, it has moved closer to the friend end of the spectrum. Now, instead of simply providing a searchable index, Google purports to answer questions through its knowledge graph functionality and to provide personalized recommendations based on the thousands of data points it has about individual users and users like them. Through these innovations, Google has transformed itself from a digital “white pages” into more of a concierge — adding a normative layer of advice over its positive, informational functionality. As Facebook starts to use its analytic capabilities to make increasingly personalized judgments and recommendations, people will come to rely on it more as a friend than as a tool. In particular, as Facebook implements suicide detection features, we may come to rely on it to provide resources to those it knows are considering self-harm or to alert their friends and family. While not firmly within the doctrine as it currently stands, Facebook’s relationship with its users is certainly expanding in ways that capture many hallmarks of the special relationship necessary for tort liability.

The second element — expertise and experience — also poses a novel question in the Facebook context. Facebook is a tech company — it does not purport to have any specialized knowledge in diagnosing individuals at risk of self-harm. However, given the nature of machine-learning algorithms, Facebook engineers with no experience in psychiatric diagnostics could build an algorithm that is better at predicting an individual’s chance of attempting suicide than the average psychiatrist. Would a superior diagnostic rate satisfy the expertise requirement even if neither the algorithm nor its creators had any formal training? It’s not entirely clear what the relevant inquiry would be in this context. If it is not one of formal credentialing, but rather whether Facebook has sufficient means to know with confidence that a user is in danger of self-harm, the element may very well be satisfied.

The potential accuracy and precision of Facebook’s suicide prevention algorithm also speaks to the foreseeability of the self-harm. Traditionally, liability is reserved for instances where the suicide was reasonably foreseeable by the party in question. See, e.g., Sneider v. Hyatt Corp., 390 F. Supp. 976 (N.D. Ga. 1975). Facebook’s artificial intelligence algorithm for identifying those at risk of self-harm will be tested extensively before implementation and will likely have a precise, known error rate. Thus, if the algorithm flags a user, Facebook will know with an unprecedented degree of specificity how likely it is that the user will go on to commit suicide. It will be hard for Facebook to argue that it could not foresee a user’s suicide if its algorithm is more accurate than psychiatrists and the company knows the exact percentage probability that a flagged user will go on to self-harm.
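The “known error rate” in that argument would presumably be estimated on held-out data. As a rough illustration of the arithmetic, the probability that a flagged user goes on to self-harm is the positive predictive value of the flag; the counts below are invented for the sake of example.

```python
# Hypothetical arithmetic for a "known error rate": the positive predictive
# value of a flag, estimated from a held-out validation set. All counts are
# invented for illustration.
true_positives = 80      # flagged users who later attempted self-harm
false_positives = 320    # flagged users who did not

ppv = true_positives / (true_positives + false_positives)
print(f"P(self-harm | flagged) = {ppv:.1%}")   # -> 20.0%
```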

This is not to say that Facebook will clearly be assuming liability for failure to prevent suicides if it implements artificially intelligent tools designed to identify those at risk of self-harm. It is simply to say that as Facebook takes increasingly aggressive steps to mitigate the prevalence of self-harm on social media, it will interact with existing law in novel, complex ways. The law has not before encountered an entity with the distant yet deeply personal relationship Facebook has with such a large segment of society. As we come to rely on the platform for more and more social functions — including, perhaps, identifying and assisting members of our communities at risk of suicide — Facebook may find itself entangled in unanticipated legal doctrines originally designed for different entities in different situations. Facebook’s efforts to prevent suicide among its users are laudable and should continue. But as social media platforms expand in novel directions, both the companies and the jurisdictions in which they operate should consider the ways in which law designed for the analog world might present surprising obstacles to beneficial initiatives in the digital one.

[1] There have been cases, however, where the duty has been extended to other types of relationships, including, for example, a hotel’s relationship with one of its guests. See Sneider v. Hyatt Corp., 390 F. Supp. 976 (N.D. Ga. 1975).
