By Adriana Krasniansky
This article is the first post in a four-part series looking at robots being developed for aging care, as well as their ethical implications. In this first article, we explore the rise of emotional companion robots such as the now-famous Paro, which are designed to soothe and comfort.
What are Emotional Companion Robots?
Emotional companion robots deliver on a very basic definition of the term “companionship:” they provide emotional soothing and a constant presence for users. Many emotional companion robots are modeled after animal-assisted therapy (AAT) pets, which are trained to calm and support individuals with Alzheimer’s, dementia, and cognitive impairments.
AAT in elder care can be challenging; animals risk injury to patients, trigger allergies, and require regular exercise (and bathroom breaks). Animals may also refuse to cooperate, which can further agitate patients. Emotional companion robots have similar demonstrated outcomes to AAT—reducing stress, improving mood, and stimulating conversation—without the logistical hang-ups of animal care.
Such robots are sometimes billed as a type of non-medication antidepressant.
The market for emotional companion robots is international and rapidly growing. Top products include Hasbro’s Joy for All robotic pets, the Tombot Jennie robot golden retriever, and Japanese manufacturer AIST’s robotic seal, Paro. Paro is a recognizable retail and pop-culture figure; in Japan, more than 1,300 Paros have been sold since 2005, and Paro has been referenced in the television shows The Simpsons and Master of None.
The following paragraphs outline ethical considerations facing emotional companion robotics today. While many of these questions will resurface later in the series, the conversation below is focused on this category.
As Jennie and Paro suggest, emotional companion robots tend to be modeled after real animals. They are also programmed to behave like their living counterparts: many notice light, respond to touch, and recognize their names.
These designs are largely shaped by the preferences of seniors, particularly those who can no longer own a pet. “[Users] prefer a realistic texture and feel. But most importantly, they prefer realistic behaviors,” says Tombot founder Thomas E. Stevens in a Fast Company interview. Realistic design can also serve a therapeutic purpose: evoking nostalgia in memory-loss patients, which can increase feelings of self-esteem and social connectedness.
However, some individuals cannot distinguish between realistic robots and real animals. Is realistic design deceptive? “We never present [Paro] as anything other than what he is: a robotic seal,” says Kathy Martyn, a lecturer in health sciences at the University of Brighton, in an interview with Quartz. Even when patients cannot tell the difference, many practitioners believe that the benefits outweigh the risks and allow seniors to cuddle what they believe is a real pet.
Because emotional companion robots today have a limited range of capabilities, there is a relatively high tolerance for realistic design. Yet these interactions pose other ethical challenges.
An important argument in favor of emotional companion robots is that they extend human care. They are available at all hours of the day or night, and they supplement the finite patience and energy of caretakers (or therapy animals). In situations where human care is limited or unavailable, few argue that emotional companion robots are more detrimental than the status quo.
However, emotional companion robots may change how humans accept responsibilities to comfort one another. In her book Alone Together, MIT professor Sherry Turkle gives the example of Tim, a man who visits his mother in the nursing home:
Tim “said [Paro] made ‘walk[ing] out that door’ so much easier when he visited [his mother]… In the short term, Tim’s case may look as though it charts a positive development. An older person seems content; a child feels less guilty…Does the ‘feel-good moment’ provided by the robot deceive people into feeling less need to visit?” (Turkle 124-125).
Beyond responsibility, Turkle’s example asks us to consider whether robotic support inevitably leads down the slippery slope of substitution. It also asks us to interrogate why robotic substitution might feel morally questionable, even for tasks as basic as emotional soothing. If someone feels emotionally comforted by a companion, why does it matter if that companion is a machine?
We’ll explore this question and others in the rest of our series. Our next article focuses on in-home task coordinators, also known as home hubs.