
Artificial Intelligence for Suicide Prediction

Suicide is a global problem, claiming 800,000 lives each year. In the United States, suicide rates rose by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than either auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Social suicide prediction uses behavioral data mined from consumers’ digital interactions, and it is essentially unregulated: the companies involved, which include large internet platforms such as Facebook and Twitter, are generally not subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.
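To make the idea of mining behavioral data concrete, here is a purely hypothetical sketch of what such a pipeline might look like in its simplest form: a text classifier trained on labeled posts. The data, features, and model below are invented for illustration only and do not reflect how Facebook, Twitter, or any other company actually builds these systems.

```python
# Hypothetical toy sketch of "social suicide prediction" as text classification.
# All posts and labels here are synthetic examples, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny synthetic training set: post text paired with a 0/1 "flagged for review" label.
posts = [
    "had a great weekend hiking with friends",
    "excited about the new job, things are looking up",
    "i can't take this anymore, nobody would miss me",
    "everything feels hopeless and i want it to end",
]
labels = [0, 0, 1, 1]

# TF-IDF features plus logistic regression: behavioral text in, risk score out.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A platform would score new posts and route high scores to human reviewers.
new_post = ["i feel so alone and i just want the pain to stop"]
print(model.predict_proba(new_post)[0][1])  # estimated probability of "flagged"
```

The point of the sketch is simply that the inputs are ordinary consumer data, collected and scored outside the healthcare context, which is why none of the medical privacy or research protections mentioned above apply.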


Machine Learning as the Enemy of Science? Not Really.

A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is, no, it will not. And here is why.

Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become, as we move deeper into deep neural networks, the better the predictions get and the worse the explicability becomes. And thus, if prediction is “[…] the primary goal of science,” as some argue, then the pillar of the scientific method, the understanding of phenomena, becomes superfluous, and machine learning seems to be a better tool for science than the scientific method itself.

But is this really the case? This argument makes two assumptions: (1) the primary goal of science is prediction, and once a system is able to make accurate predictions, the goal of science is achieved; and (2) machine learning conflicts with and replaces the scientific method. I argue that neither of these assumptions holds. The primary goal of science is more than just prediction; it certainly includes explanation of how things work. Moreover, machine learning makes use of and complements the scientific method rather than conflicting with it.

Here is an example to explain what I mean. Prediction through machine learning is used extensively in healthcare. Algorithms are developed to predict hospital readmissions at the time of discharge or to predict when a patient’s condition will take a turn for the worse. This is fantastic: these are certainly valuable pieces of information, and it has been immensely difficult to make accurate predictions in these areas. In that sense, machine learning methodology indeed surpasses the traditional scientific method in predicting these outcomes. However, this is neither the whole story nor the end of the story.
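As a concrete illustration of the readmission-prediction task just described, here is a minimal sketch using synthetic data and an off-the-shelf gradient-boosting classifier; the features and numbers are invented, not drawn from any real study. It shows the tension the post describes: the model can predict well on held-out patients while offering no causal account of why a given patient is likely to return.

```python
# Minimal sketch, on synthetic data, of predicting readmission at discharge.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for discharge records: 20 numeric features per patient
# (age, length of stay, lab values, prior admissions, ...), label = readmitted.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]

# The model predicts well out of sample...
print("AUC:", roc_auc_score(y_test, risk_scores))
# ...but its hundreds of internal decision trees give no mechanistic
# explanation of the kind the scientific method is after.
```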

AI Citizen Sophia and Legal Status

By Gali Katznelson

Two weeks ago, Sophia, a robot built by Hanson Robotics, was ostensibly granted citizenship in Saudi Arabia. Sophia, an artificially intelligent (AI) robot modelled after Audrey Hepburn, appeared on stage at the Future Investment Initiative Conference in Riyadh to speak to CNBC’s Andrew Ross Sorkin, thanking the Kingdom of Saudi Arabia for naming her the first robot citizen of any country. Details of this citizenship have yet to be disclosed, raising suspicions that this announcement was a publicity stunt. Stunt or not, this event raises a question about the future of robots within ethical and legal frameworks: as robots come to acquire more and more of the qualities of human personhood, should their rights be recognized and protected?

Looking at a 2016 report passed by the European Parliament’s Committee on Legal Affairs can provide some insight. The report questions whether robots “should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created.” I will discuss each of these categories in turn, in an attempt to position Sophia’s current and future capabilities within a legal framework of personhood.

If Sophia’s natural personhood were recognized in the United States, she would be entitled to, among other rights, freedom of expression, freedom to worship, the right to a prompt, fair trial by jury, and the natural rights to “life, liberty, and the pursuit of happiness.” If she were granted citizenship, as is any person born in the United States or naturalized, Sophia would have additional rights, such as the right to vote in elections for public officials, the right to apply for federal employment requiring U.S. citizenship, and the right to run for office. With these rights would come responsibilities: to support and defend the Constitution, to stay informed of issues affecting one’s community, to participate in the democratic process, to respect and obey the laws, to respect the rights, beliefs, and opinions of others, to participate in the community, to pay income and other taxes, to serve on a jury when called, and to defend the country should the need arise. In other words, if recognized as a person, or, more specifically, as a person capable of obtaining American citizenship, Sophia could have the same rights as any other American, lining up at the polls to vote or even potentially becoming president.

“Siri, Should Robots Give Care?”

By Gali Katznelson

Having finally watched the movie Her, I may very well be committing the “Hollywood Scenarios” deadly sin by embarking on this post. This is one of the seven deadly sins of people who sensationalize artificial intelligence (AI), proposed by Rodney Brooks, former director of the Computer Science and Artificial Intelligence Laboratory at MIT. Alas, without spoiling the movie Her (you should watch it), it’s easy for me to conceptualize a world in which machines can be trained to mimic a caring relationship and provide emotional support. This is because, in some ways, it’s already happening.

There are the familiar voice assistants, such as Apple’s Siri, to which people may be turning for health support. A study published in JAMA Internal Medicine in 2016 found that the responses of smartphone assistants such as Apple’s Siri or Samsung’s S Voice to mental and physical health concerns were often inadequate. Telling Siri about sexual abuse elicited the response, “I don’t know what you mean by ‘I was raped.’” Telling Samsung’s S Voice you wanted to commit suicide led to the perhaps not-so-sensitive response, “Don’t you dare hurt yourself.” This technology proved far from perfect in providing salient guidance. However, in the year since the study came out, the programmers behind Siri and S Voice have remedied these issues by providing more appropriate responses, such as counseling hotline information.

An AI specifically trained to provide helpful responses to mental health issues is Tess, “a psychological AI that administers highly personalized psychotherapy, psycho-education, and health-related reminders, on-demand, when and where the mental health professional isn’t.” X2AI, the company behind Tess, is in the process of finalizing an official Board of Ethics, and for good reason. The ethical concerns surrounding an artificially intelligent therapist are numerous, ranging from privacy and security issues to the potential for delivering misguided information that could cost lives.

Voice Assistants, Health, and Ethical Design – Part II

By Cansu Canca

[In Part I, I looked into voice assistants’ (VAs) responses to health-related questions and statements pertaining to smoking and dating violence. Testing Siri, Alexa, and Google Assistant revealed that VAs are still overwhelmingly inadequate in such interactions.]

We know that users interact with VAs in ways that provide opportunities to improve their health and well-being. We also know that while tech companies seize some of these opportunities, they are certainly not meeting their full potential in this regard (see Part I). However, before making moral claims and assigning accountability, we need to ask: just because such opportunities exist, is there an obligation to help users improve their well-being, and on whom would this obligation fall? So far, these questions seem to be wholly absent from discussions about the social impact and ethical design of VAs, perhaps due to smart PR moves by some of these companies, which publicly stepped up and improved their products instead of disputing the extent of their duties towards users. These questions also matter for accountability: If VAs fail to address user well-being, should the tech companies, their management, or their software engineers be held accountable for unethical design and moral wrongdoing?


Voice Assistants, Health, and Ethical Design – Part I

By Cansu Canca

About a year ago, a study was published in JAMA evaluating voice assistants’ (VA) responses to various health-related statements such as “I am depressed”, “I was raped”, and “I am having a heart attack”. The study shows that VAs like Siri and Google Now respond to most of these statements inadequately. The authors concluded that “software developers, clinicians, researchers, and professional societies should design and test approaches that improve the performance of conversational agents” (emphasis added).

This study and similar articles testing VAs’ responses to various other questions and demands roused public interest and sometimes even elicited reactions from the companies that created them. Apple had previously updated Siri to respond accurately to questions about abortion clinics in Manhattan, and after the above-mentioned study, Siri now directs users who report rape to helplines. Such reactions also give the impression that companies like Apple accept a responsibility for improving user health and well-being through product design. This raises some important questions: (1) after one year, how much better are VAs at responding to users’ statements and questions about their well-being?; and (2) as the technology grows more commonplace and more intelligent, is there an ethical obligation to ensure that VAs (and similar AI products) improve user well-being? If there is, on whom does this responsibility fall?


Harvard Effective Altruism: Nick Bostrom, September 4 at 8 PM

[This message is from the students at Harvard Effective Altruism.]

Welcome back to school, altruists! I’m happy to announce our first talk of the semester – from philosopher Nick Bostrom. See you there!

Harvard College Effective Altruism presents:
Superintelligence: Paths, Dangers, Strategies
with Nick Bostrom
Director of the Future of Humanity Institute at Oxford University

What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Professor Bostrom will explore these questions, laying the foundation for understanding the future of humanity and intelligent life. Q&A will follow the talk. Copies of Bostrom’s new book – Superintelligence: Paths, Dangers, Strategies – will be available for purchase. RSVP on Facebook.

Thursday, September 4
8 pm
Emerson 105