Artificial Intelligence for Suicide Prediction

Suicide is a global problem, causing 800,000 deaths per year worldwide. In the United States, suicide rates have risen by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than either auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Social suicide prediction uses behavioral data mined from consumers’ digital interactions, yet it is essentially unregulated. The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.

 

How does social suicide prediction work?

As we go about our daily routines, we leave behind digital traces or “breadcrumbs” reflecting where we’ve been and what we’ve done. Companies use AI to analyze these traces and infer health information, which is used for targeted advertising and algorithmic decision-making. For instance, Facebook’s AI scans user posts for words and phrases it believes are correlated with suicidal thoughts and stratifies the posts into risk categories.
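
To make the mechanism concrete, the following is a minimal, hypothetical sketch of phrase-based risk scoring and stratification in Python. It is not Facebook’s method, which relies on proprietary classifiers trained on labeled posts rather than keyword lists; every phrase, weight, and threshold below is invented purely for illustration.

    # Hypothetical illustration only: real systems use trained machine-learning
    # classifiers, not keyword lists. The phrases, weights, and thresholds here
    # are invented to show the general idea of scoring a post and sorting it
    # into a risk category.

    RISK_PHRASES = {
        "want to die": 3.0,
        "kill myself": 3.0,
        "no reason to live": 2.0,
        "hopeless": 1.0,
    }

    def score_post(text: str) -> float:
        """Sum the weights of risk-associated phrases found in the post."""
        lowered = text.lower()
        return sum(w for phrase, w in RISK_PHRASES.items() if phrase in lowered)

    def stratify(score: float) -> str:
        """Map a raw score to a coarse risk category."""
        if score >= 3.0:
            return "high risk"      # e.g., escalate for human review or intervention
        if score >= 1.0:
            return "moderate risk"  # e.g., surface support resources to the user
        return "low risk"

    post = "I feel hopeless and see no reason to live."
    print(stratify(score_post(post)))  # prints "high risk" under these invented weights

Under these invented weights, the example post matches “hopeless” and “no reason to live,” so it crosses the “high risk” threshold; in a deployed system, that category is what would trigger the interventions described below.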

For posts deemed “high risk,” the company may notify police, who then perform “wellness checks” at users’ homes. In 2017, Facebook announced that its system had prompted over 100 wellness checks in one month. Its affiliate Crisis Text Line, a text-based counseling service targeted at children and teens, reports sending police and other first responders to users’ homes over 11,500 times.

At first glance, social suicide prediction seems like a win-win proposition, allowing online platforms to benefit users and their families. However, social suicide predictions emerge from a black box of algorithms that are protected as trade secrets. Unlike medical suicide prediction research, which undergoes ethics review by institutional review boards and is published in academic journals, the methods and outcomes of social suicide prediction remain confidential. We don’t know whether it is safe or effective.

When companies engage in suicide prediction, numerous dangers arise, including privacy risks. Because most companies predicting suicide are not covered entities under HIPAA, their predictions can be shared with third parties without consumer knowledge or consent. Transferring or selling suicide prediction data to advertisers, data brokers, and insurance companies can promote discrimination against consumers who are labeled suicidal.

Advertisers and data brokers may argue that the collection and sale of suicide predictions constitutes protected commercial speech under the First Amendment, and they might be right. In Sorrell v. IMS Health, the US Supreme Court struck down a Vermont law restricting the sale of pharmacy records containing doctors’ prescribing habits. The Court reasoned that the law infringed the First Amendment rights of data brokers and drug makers because it prohibited them from purchasing the data while allowing it to be shared for other uses. This opinion may threaten any future state laws limiting the sale of suicide predictions. Such laws must be drafted with this case in mind, either permitting suicide predictions to be shared only for a narrow range of purposes, such as research, or prohibiting their sale completely.

In addition to threatening consumer privacy, social suicide prediction poses risks to consumer safety and autonomy. Due to the lack of transparency, it is unknown how often wellness checks result in involuntary hospitalization, which deprives people of liberty and may do more harm than good. In the short term, hospitalization can prevent suicide. However, people are at high risk for suicide shortly after being released from hospitals. Thus, civil commitments could paradoxically increase the risk of suicide.

Facebook has deployed its system in nearly every region in which it operates, except in the European Union. In some countries, attempted suicide is a criminal offense (e.g., in Singapore, where Facebook maintains its Asia-Pacific headquarters). In those countries, Facebook-initiated wellness checks could result in criminal prosecution and incarceration, illustrating how social suicide prediction is analogous to predictive policing.

In the US, the Fourth Amendment protects people and their homes from warrantless searches. However, under the exigent circumstances doctrine, police may enter homes without warrants if they reasonably believe entry is necessary to prevent physical harm, such as stopping a suicide. Nevertheless, it may be unreasonable to rely on opaque AI-generated suicide predictions to circumvent Fourth Amendment protections when no information regarding their accuracy is publicly available.

Underrepresented minorities and other vulnerable groups may be disproportionately impacted by social suicide predictions and wellness checks. According to Crisis Text Line, 5 percent of its texters identify as Native American, which is over three times the percentage in the US population. Hispanics and members of the LGBTQ+ community are also overrepresented in the company’s user base relative to their presence in the population at large. Moreover, 20 percent of Crisis Text Line’s users come from zip codes where household income is in the lowest 10 percent, and 10 percent of its users are under age 13.

Because suicide prediction tools affect civil liberties and may disproportionately impact vulnerable groups, consumers should demand greater transparency.

 

Suggestions for regulating social suicide prediction

Companies engaged in suicide prediction should publish their algorithms for analysis by privacy experts, computer scientists, and mental health professionals. At a minimum, they should disclose the factors weighed to make predictions and the outcomes of subsequent interventions. In the European Union, Article 22 of the General Data Protection Regulation (GDPR) gives consumers the right “not to be subject to a decision based solely on automated processing, including profiling,” which may include profiling for suicide risk.

Article 15 of the GDPR may grant consumers a right to explanation, allowing consumers to request the categories of information being collected about them and to obtain “meaningful information about the logic involved . . . .”

The US lacks similar protections at the federal level. However, the California Consumer Privacy Act of 2018 (CCPA) provides some safeguards, allowing consumers to request the categories of personal information collected about them and to ask that their personal information be deleted. The CCPA includes inferred health data within its definition of personal information, which likely encompasses suicide predictions. While these safeguards will increase the transparency of social suicide prediction, the CCPA has significant gaps. For instance, it does not apply to non-profit organizations such as Crisis Text Line. Furthermore, the tech industry is lobbying to weaken the CCPA and to enact softer federal laws that would preempt it.

One way to protect consumer safety would be to regulate social suicide prediction algorithms as software-based medical devices. The Food and Drug Administration (FDA) has collaborated with international medical device regulators to propose criteria for defining “Software as a Medical Device,” which include whether developers intend the software to diagnose, monitor, or alleviate a disease or injury. Because social suicide prediction aims to monitor suicidal thoughts and prevent users from injuring themselves, it should satisfy this requirement. The FDA also regulates mobile health apps, and because apps that utilize suicide prediction algorithms pose risks to consumers, the agency likely reserves the right to regulate them, including Facebook apps such as Messenger.

Jack Balkin argues that the common law concept of the fiduciary should apply to companies that collect large volumes of consumer information. Like classic fiduciaries, such as doctors and lawyers, internet platforms possess more knowledge than their clients, and these power asymmetries create opportunities for exploitation.

Treating social suicide predictors as information fiduciaries would subject them to duties of care, loyalty, and confidentiality. Under the duty of care, companies could be required to ensure that their suicide prediction algorithms and interventions are safe. The duties of loyalty and confidentiality might require them to protect suicide prediction data and to abstain from selling it or otherwise using it to exploit consumers.

Alternatively, we might require that social suicide predictions be made under the guidance of licensed healthcare providers. For now, humans remain in the loop at Facebook and Crisis Text Line, yet that may change. Facebook has over two billion users, and as it continuously monitors user-generated content for a growing list of threats, the temptation to automate suicide prediction will grow. Even if human moderators remain in the system, AI-generated predictions may nudge them toward contacting police despite any reservations they have about doing so. Similar concerns exist in the context of criminal law, where AI-based algorithms provide recidivism risk scores to judges during sentencing. Critics argue that even though judges retain ultimate decision-making power, defying software recommendations may be difficult. Like social suicide prediction tools, criminal sentencing algorithms are proprietary black boxes, and the logic behind their decisions is off-limits to people relying on their scores and those affected by them.

The due process clause of the Fourteenth Amendment protects people’s right to avoid unnecessary confinement. Only one state supreme court has considered a due process challenge to the use of proprietary algorithms in criminal sentencing. In State v. Loomis, the court upheld the petitioner’s sentence because it was not based solely on a risk assessment score. Nevertheless, in the context of suicide prediction, the risk of hospitalizing people without due process is a compelling reason to make the logic of AI-based suicide predictions more transparent.

Regardless of the regulatory approach taken, it is worth taking a step back to scrutinize social suicide prediction. Tech companies may like to “move fast and break things,” but suicide prediction is an area that should be pursued methodically and with great caution. Lives, liberty, and equality are on the line.

 

An earlier version of this article was published on the Balkinization blog. 

Mason Marks

Dr. Mason Marks is a Senior Fellow and Project Lead on the Project on Psychedelics Law and Regulation at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. He is an Assistant Professor of Law at the University of New Hampshire Franklin Pierce School of Law and an affiliated fellow at the Information Society Project at Yale Law School. View his full bio at masonmarks.com.

One thought to “Artificial Intelligence for Suicide Prediction”

  1. Without a prior diagnosis from a professional in the field, we can’t assume a person needs hospitalization just because an algorithm says so. The algorithm merely establishes a pattern, and that pattern does not necessarily lead to the same result in every situation.
    It is an incredible tool for reacting in real time, especially in very urgent situations, since we don’t know whether a person is about to commit suicide or is just thinking about it and asking for help. But the main issue is relying on this tool to determine whether a person needs hospitalization or confinement of any kind, or what kind of attention they need.
    The information these programs use to make their predictions is uploaded by humans and is subject to error. Considering this, we have to be very careful, and these programs and the corporations that run them must be transparent and restricted by appropriate legislation that sets parameters to avoid violations of citizens’ rights.
