
Artificial Intelligence for Suicide Prediction

Suicide is a global problem, claiming roughly 800,000 lives per year worldwide. In the United States, suicide rates rose by 25 percent over the past two decades, and suicide now kills some 45,000 Americans each year, more than either auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Social suicide prediction uses behavioral data mined from consumers’ digital interactions, yet it is essentially unregulated. The companies involved, which include large internet platforms such as Facebook and Twitter, are generally not subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.


What are Our Duties and Moral Responsibilities Toward Humans when Constructing AI?

Much of what we fear about artificial intelligence comes down to our underlying values and perceptions about life itself, as well as the place of the human in that life. The New Yorker cover last week was a telling example of the kind of dystopic society we claim we wish to avoid.

I say “claim” not accidentally, for in some respects the nascent stages of such a society already exist; perhaps they have existed for longer than we realize or care to admit. Regimes of power, what Michel Foucault called biopolitics, are embedded in our social institutions and in the mechanisms, technologies, and strategies by which human life is managed in the modern world. This arrangement could be positive, neutral, or nefarious: it all depends on whether these institutions are used to subjugate (e.g., racism) or liberate (e.g., rights) the human being, and whether they infringe upon the sovereignty of the individual or uphold the sovereignty of the state and the rule of law. In short, biopower is the impact of political power on all domains of human life. This is all the more pronounced today, as technological advances have enabled biopower to stretch beyond the political to almost all facets of daily life in the modern world.