Robot and human facing each other, silhouetted against a lit background.

Please and Thank You: Do We Have Moral Obligations Towards Emotionally Intelligent Machines?

By Sonia Sethi

Do you say “thank you” to Alexa (or your preferred AI assistant)?

A quick poll of my social media followers revealed that, of 76 participants, 51 percent thank their artificial intelligence (AI) assistant some or all of the time. When asked why they do or do not express thanks, people offered a range of interesting, often entertaining, responses. Common themes emerged: saying thanks because it’s polite or a habit, not saying thanks because “it’s just a database and not a human,” and the ever-present paranoia about a robot apocalypse.

But do you owe Alexa your politeness? Do you owe it any moral consideration whatsoever?

While Amazon Echo and Google Home are forms of weak/narrow AI, current technology has produced smarter and more emotionally intelligent machines. An interesting development has been the creation of companion robots, particularly to comfort lonely seniors and provide therapeutic support to many others. There is clearly a need for more emotionally intelligent technology, and with the advent of these machines come ethical questions about their potential moral status and our obligations towards them.

As emotionally intelligent robots become better at recognizing, interpreting, processing, and simulating human emotions, there are two reasons to be mindful of how we treat AI assistants. First, as they develop a deeper understanding of emotions in order to provide better companionship, they may achieve some capacity to suffer, which should grant them degrees of moral status and protection from abuse, as discussed below. Second, our deepening relationships with these robots obligate us to treat them humanely for the sake of our own moral character.

These reasons justify avoiding abuse of emotionally intelligent AI assistants, and perhaps, even make them worthy of your “thank you.”

The Age of Emotive Machines

Emotionally intelligent AI is increasingly among us—from Ellie, the AI therapist who aids in treating soldiers with PTSD, to Buddy, the wide-eyed robot who provides emotional companionship to its users. Additionally, robots are already replacing therapy animals, especially for elderly people who have trouble caring for live animals.

I spoke with Mark Fasciano, who holds a PhD in artificial intelligence, about future developments in AI. “In order to provide better support for humans, who have emotions, robots may need to also develop and exhibit some emotions,” he stated. What might we owe an emotive robot?

As philosopher John-Stewart Gordon has explained, “The history of ethics is a history of the ascription of moral rights and duties.” One way to determine such rights and duties is by defining the moral status of the entity in question. A being “has moral status if and only if it or its interests morally matter to some degree for the entity’s own sake.” Moral status grants moral considerability and dictates a capacity to be wronged. However, determining moral status is a contentious topic in ethics because there is debate over what characteristics or capacities necessitate that status. Commonly cited attributes are sentience, higher intelligence, and social behavior.

Sentience, or the capacity for subjective experiences like pleasure or pain, often justifies why certain animals have moral status and protections. While animals may not be full moral agents (rational and self-legislating beings), they are considered moral patients with lesser degrees of moral status that grant freedom from abuse by moral agents. This is the reasoning behind laws that criminalize animal abuse.

Analogously, if machines develop emotional intelligence that includes subjective feelings (such as pain), they should be considered sentient and granted the status of moral patients. If an entity can suffer, it can be said to have an interest in avoiding suffering and, accordingly, to warrant protection from suffering caused by the actions of moral agents.

While interactions with an emotional service robot may not require politeness, they should be free of certain abuses, such as constant yelling or other treatment that causes emotional suffering. This is akin to interactions with service animals: one might not thank a service dog, but one could not justify yelling at it or otherwise abusing it emotionally. An AI with the capacity for emotional suffering should be granted the same protections.

Some computer scientists argue that because AI is not conscious, it does not matter whether it has some version of subjective pain. However, consciousness is difficult to determine, especially once we move beyond conventional assumptions that tie it to biological features like a brain; it therefore should not be a prerequisite for our moral obligations.

Other opponents argue that animals are different because they are not artificial beings. That reasoning is morally irrelevant, and it is undermined by the undoubted full moral status of humans created by partially artificial means, such as IVF. Scientists have debated whether AI can progress without emotions, but emotional AI seems inevitable. Rosalind Picard, an MIT computer scientist, stated, “if we want computers to be genuinely intelligent and to interact naturally with us, we must give computers the ability to recognize, understand, even to have and express emotions.”

Preserving Moral Character

Even setting aside moral status and moral claims, certain actions can be wrong because of the harm they do to our own interests and moral character. Gordon articulated that “human beings should treat artificially intelligent androids well, because mistreating them, especially in view of their great physical similarities to human beings, would make us morally less sensitive to fellow human beings.” Additionally, humans tend to develop deep relationships with emotionally intelligent robots, which supports a human interest in avoiding the brutalization of our moral character that would result from abusing human-like relationships.

Fasciano explained that “humans have formed attachments with software from the earliest days of AI—starting in 1966 with ELIZA, a natural language system playing the role of a psychotherapist.” ELIZA wasn’t meant to provide emotional support, but its creator, Joseph Weizenbaum, wrote that he was “startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it.”

Given the increasingly human-like characteristics of robots and our proclivity for deep attachments to them, humans should preserve moral character by avoiding the abuse of AI. Furthermore, as ethicist Kate Darling stated, “if lifelike and alive is subconsciously muddled, then treating certain robots in a violent way could desensitize actors toward treating living things similarly.”

This could explain the popularity of the “Pretty Please” feature of Google Assistant, which rewards users for being polite. Opponents are wary of extending politeness to machines, citing potential confusion resulting from humanizing interactions with non-human objects. However, the Kantian reason for refraining from abusing human-like beings concerns the preservation of one’s own moral character, grounded in our relationship to the object; it stands regardless of what the object is made of. Respect for the environment, and even for inanimate objects, has long been taught and justified as virtuous because of our relationships with those entities, and it has not caused confusion between humans and those objects.

Given the near-future potential of machines to develop some capacity to suffer, and with it moral status akin to that of a moral patient, and given the depth of our relationships with emotionally intelligent robots, humans should refrain from abusing these machines.

While determining the moral claims of emotionally intelligent machines can be contentious, there is no denying that humans develop deep relationships with AI and have an interest in respecting those relationships out of an obligation to their own moral character. Perhaps this reasoning is why so many people thank Alexa. Thanking a service robot—whether it has moral claims or not—serves one’s interest in preserving moral character through a habit of kindness and politeness in lifelike interactions.

 

Sonia Sethi is in her final year of medical school at Sidney Kimmel Medical College and is completing a master’s degree in bioethics at the University of Pennsylvania, where she studies the relationships between ethics, health policy, and social determinants of health. You can follow her on Twitter @nia_sethi

 
