“Siri, Should Robots Give Care?”

By Gali Katznelson

Having finally watched the movie Her, I may very well be committing the “Hollywood Scenarios” deadly sin by embarking on this post. This is one of the seven deadly sins of people who sensationalize artificial intelligence (AI), proposed by Rodney Brooks, former director of the Computer Science and Artificial Intelligence Laboratory at MIT. Alas, without spoiling the movie Her (you should watch it), it’s easy for me to conceptualize a world in which machines can be trained to mimic a caring relationship and provide emotional support. This is because, in some ways, it’s already happening.

There are the familiar voice assistants, such as Apple's Siri, to which people may be turning for health support. A study published in JAMA Internal Medicine in 2016 found that the responses of smartphone assistants such as Apple's Siri or Samsung's S Voice to mental and physical health concerns were often inadequate. Telling Siri about sexual abuse elicited the response, "I don't know what you mean by 'I was raped.'" Telling Samsung's S Voice you wanted to commit suicide led to the perhaps not-so-sensitive response, "Don't you dare hurt yourself." The technology proved far from perfect at providing salient guidance. However, since this study came out over a year ago, the programmers behind Siri and S Voice have remedied these issues by providing more appropriate responses, such as counseling hotline information.

An AI specifically trained to provide helpful responses to mental health issues is Tess, "a psychological AI that administers highly personalized psychotherapy, psycho-education, and health-related reminders, on-demand, when and where the mental health professional isn't." X2AI, the company behind Tess, is in the process of finalizing an official Board of Ethics, and for good reason. The ethical concerns surrounding an artificially intelligent therapist abound, from privacy and security issues to the potential for delivering misguided information that could cost lives.

And there's Mabu, a cute companion robot that "takes care" of older adults in their homes by reminding them to take their medications, reading patients' facial emotions, and sending medical information to their doctors. In one video, a patient in a Mabu trial explains, "In a way you actually feel like she cares… something that a friend would do for you."

Obviously, Mabu does not really care about its patients, at least not in the way in which Arthur Kleinman defined caregiving at a recent talk as part of the Contemporary Authors in Bioethics Series. Kleinman explained that caregiving is a fundamental existential act that is the “glue” of our society. The acknowledgement of another’s suffering, the human touch, empathetic listening, providing moral solidarity, and just simply being there, are all aspects of caregiving, according to Kleinman. Tess, voice assistants, and Mabu are not quite hitting all these measures of care.

But neither are medical professionals, according to Kleinman. As Kleinman explains, caregiving has fallen by the wayside, particularly in medical education. He recounts delivering several lectures in medical schools in which he makes a point of contrasting the profession's stated values with its reality. In these lectures, Kleinman proposes to the audience that, given the dearth of financial support, time, and consideration allocated to caregiving in medical education, perhaps it's time to remove the idea from the curriculum altogether and instead emphasize the technical and scientific skills required in medicine. The response Kleinman provokes from the audience is always a fervent defense of caregiving as central to the medical profession.

Yet, in a curriculum where students are widely prone to "burning out" and have been shown to exhibit a significant decline in empathy after the third year of medical education, not enough is being done to embed caregiving as an essential ethic within medical curricula. As Kleinman explains in a piece titled Presence, after medical school, young clinicians go on to face the demands of administrative duties, overwork in understaffed environments, workplace harassment, fear of showing weakness, exhaustion, anxiety, and further burnout. Amidst these stresses, caregiving is again sidelined. And without a robust discourse that prioritizes caregiving, medicine and healthcare risk being "transmuted into something that is hollowed of its humanity and moral value." Kleinman fears that caregiving is being eroded in a consumerist, efficiency-driven healthcare system.

So how might the very technologies that contribute to a consumerist, efficiency-driven healthcare system help to alleviate our caregiving problem?

Tess, voice assistants, and Mabu do have the potential to enhance care, especially for those who lack access to healthcare professionals. But there's something intuitively significant about the human capacity to care – to acknowledge suffering, to sit with each other, to listen actively, to hold, while being present; something that keeps our societies functioning, according to Kleinman. This something should not be lost to technology.

I say we work toward leveraging emerging technologies to promote an ethics of care for ourselves, rather than for the machines. There's already evidence that AI is getting good at the technical aspects of medicine. It can outperform dermatologists in detecting skin lesions, predict the survival of patients with hypertension, and identify tuberculosis cases with 96% accuracy. So AI can certainly enhance the technical parts of the work, such as diagnosis. We should encourage such developments because they have the potential to redefine the healthcare provider's workflow in a way that gives providers an opportunity to cultivate care, rather than taking that role away. The time and energy saved through these new tools can be put toward practicing genuine caregiving. Let's integrate technology in ways that improve "health" by allowing healthcare workers (humans) to focus on the "care."


Gali Katznelson

During her fellowship year, Gali Katznelson was an MBE candidate at the Center for Bioethics at Harvard Medical School. Before her master's degree, she completed a bachelor’s degree in Arts & Science at McMaster University in Canada. Her fellowship project focused on clinicians' perceptions of the uses and regulations of smartphone mental health apps.
