[Image: a surgery room with a robot whose screen shows a doctor with a stethoscope]

The Problem With Doctors Communicating via Robot is Attitudes About Technology, Not Poorly Communicating Doctors

By Evan Selinger and Arthur Caplan

Perhaps you’ve seen the debate? A physician used video chat technology to inform a hospitalized Ernest Quintana and his family that he would be dying sooner than they expected. After he passed away, they objected to how the news was delivered. Over at Slate, Joel Zivot, an anesthesiologist and ICU physician, responded to the uproar with an essay titled “In Defense of Telling Patients They’re Dying Via Robot.”

Zivot draws from personal experience, opening his essay by writing:

“At 2 a.m. in February, I found myself speaking with the family of a dying man. We had never met before, and I had only just learned of the patient. As an ICU doctor, I have been in this situation on many occasions, but there was something new this time. The family was 200 miles away, and we were talking through a video camera. I was staffing the electronic intensive care unit, complete with a headset, adjustable two-way video camera, and six screens of streaming data.”

As the essay goes on, Zivot lets us know that he did a great job of overcoming the technological constraints. “Slowly and calmly, I explained who I was and how the camera worked. I looked directly into my camera lens so that on the video monitor, my eyes would meet theirs. I acknowledged the strangeness of the technology and worked through the data in an unhurried way. After some back and forth, we all began to relax and accept the communication.”

Zivot concludes that the real problem is communication, not technology. Physicians, he argues, simply aren’t well trained in the “technical skill” of delivering bad news. This response gets at something important while still missing the forest for the trees. Like Zivot, we believe patients deserve better than cold, machine-like physicians who don’t display warmth or compassion when talking about serious medical issues. But what’s at stake here goes beyond communicative ability and directly concerns attitudes about technology. To explain why, we need to first take a step back.

Choice and the Technological Fallacy

The most telling part of Zivot’s narrative is where he says the family came to “accept” that this is how the conversation would take place. Of course, we don’t know if the family took things as well as he describes. It’s just his perspective, after all. But even if Zivot is right, it’s still a self-congratulatory account that overlooks something crucial. Based on his description, it seems that the family didn’t have a choice in the matter. Zivot doesn’t tell us that he asked for anyone’s consent, and the way he writes suggests that he simply presumes things would have been worse for the family if they had to wait until a later time to learn the news.

For the sake of argument, let’s say Zivot tells this story perfectly and is right about every detail. The problem, then, is that he commits a technological fallacy. He makes it seem like the issue is just about conversational style and method—as if skillful conversation can transcend the constraints of any technological medium and make conversations about anything appropriate no matter what technology is used.

What Zivot fails to consider is that right now people have different beliefs about technologically mediated communication. It’s true that technology is being used in all kinds of surprising ways. Patients are talking with doctors they have personal relationships with over apps, as well as doctors they’ve never met before. One of us has even attended an online funeral.

Nevertheless, there are those who believe that some topics are only appropriate to discuss face-to-face—not on the phone, not through texting, not on social media, not on e-mail, and not through video chat (even though it might be the best way to simulate in-person discussions). That Zivot has experienced success talking to some patients about death and dying over the internet does not invalidate others feeling that if they were in the same situation their lives would be trivialized.

Maybe in the future, norms will shift and today’s dismayed reactions will seem as quaint as the famous Seinfeld walk-and-talk scene where Jerry is horrified that Elaine had the audacity to ask about a friend’s father’s health over a cell phone. But we’re not having this conversation in the future. Right here and now people are justified in believing that Mr. Quintana’s dignity was disrespected. What they are objecting to is technology being used at all, and not, as Zivot suggests, how it’s used. They would still feel disrespected by the most empathetic and conscientious physician talking over a screen.

Is there any clear, objective criterion for determining which conversations are only appropriate to have in person? No, there isn’t. Many variables matter, not only what is being said, but also who is saying it and how they’re communicating. But when the stakes are as high as death and dying in a hospital, the only appropriate judges of whether alternatives to face-to-face dialog are appropriate are the patients themselves.

If hospitals don’t defer to patients on this issue, they fail to respect autonomy.

That’s why in “How Physicians Should and Shouldn’t Talk With Dying Patients,” we argue that the only ethical policy is one requiring informed consent.

“Upon admission to hospital, patients should be given a form that explains that the hospital provides telemedicine for a variety of purposes, which could include discussing grim prognoses. The form should provide a clear and easy-to-read explanation of why this is the case, explain what follow-up options are available and what alternatives exist, and then seek consent only from patients who believe this approach matches up with their own values.”

One challenge that’s been thrown to us is how the informed consent document should read to avoid the problem of boilerplate prose. We agree this is a difficult problem. Jargon and legalese can make a mockery out of autonomy and demean our very humanity. Still, getting the words right is an achievable goal. And since it can’t be met until hospital policy changes, the time for change is now.


Evan Selinger is a Professor of Philosophy at Rochester Institute of Technology and a Senior Fellow at The Future of Privacy Forum. His most recent book, which is co-authored with Brett Frischmann, is Re-Engineering Humanity. His most recent anthology, which is co-edited with Jules Polonetsky and Omer Tene, is The Cambridge Handbook of Consumer Privacy. Cambridge University Press published both texts in 2018. Committed to public philosophy, he has written for many newspapers, magazines, and blogs, including The Guardian, The Atlantic, Slate, The Nation, Wired, and The Wall Street Journal.

Arthur L. Caplan, PhD is currently the Drs. William F and Virginia Connolly Mitty Professor and founding head of the Division of Medical Ethics at NYU School of Medicine in New York City. Prior to coming to NYU School of Medicine, Dr. Caplan was the Sidney D. Caplan Professor of Bioethics at the University of Pennsylvania Perelman School of Medicine in Philadelphia, where he created the Center for Bioethics and the Department of Medical Ethics. Caplan has also taught at the University of Minnesota, where he founded the Center for Biomedical Ethics, the University of Pittsburgh, and Columbia University.  He received his PhD from Columbia University.
