
Please and Thank You: Do We Have Moral Obligations Towards Emotionally Intelligent Machines?

By Sonia Sethi

Do you say “thank you” to Alexa (or your preferred AI assistant)?

A quick poll among my social media contacts revealed that, out of 76 participants, 51 percent thank their artificial intelligence (AI) assistant some or all of the time. When asked why they do or do not express thanks, participants offered a myriad of interesting, often entertaining, responses. Common themes emerged: saying thanks because it's polite or out of habit, not saying thanks because "it's just a database and not a human," and the ever-present paranoia about a robot apocalypse.

But do you owe Alexa your politeness? Do you owe it any moral consideration whatsoever?


Nobody Reads the Terms and Conditions: A Digital Advance Directive Might Be Our Solution

Could Facebook know your menstrual cycle?

In a recent op-ed, "You Just Clicked Yes. But Do You Know the Terms and Conditions of That Health App?," I argued that a mix of factors has made regulating web-based health services and apps urgent: most of these applications do not fall under the Health Insurance Portability and Accountability Act (HIPAA), few people actually read the Terms and Conditions, and web-based health applications are growing explosively. The need for solutions is dire.


What We Lost When We Lost Google ATEAC

By Joanna Bryson

In a few weeks, the Advanced Technology External Advisory Council (ATEAC) was scheduled to come together for its first meeting. At that meeting, we were expected to "stress test" a proposed face recognition technology policy. We were going to dedicate an entire day to it (at least a quarter of the total time they expected to get from us). The people I talked to at Google seemed profoundly disturbed by what "face recognition" could do. It's not the first time I've heard that kind of deep concern; I've also heard it in completely unrelated one-on-one settings from a very diverse set of academics whose only commonality was working at the interface of machine learning and human-computer interaction (HCI). It isn't just face recognition. It's body posture, acoustics of speech and laughter, the way a pen is used on a tablet, and (famously) text. Privacy isn't over, but it will never again be present in society without serious, deliberate, coordinated defense.

What Should Happen to our Medical Records When We Die?

By Jon Cornwall

In the next 200 years, at least 20 billion people will die. A good proportion of them will have electronic medical records, which raises the question: what are we going to do with all this posthumous medical data? Despite the seemingly logical and inevitable use of deceased persons' medical data for research and healthcare, both now and in the future, how best to manage posthumous medical records remains unclear.

Presently, large medical data sets do exist and have their uses, though these largely contain 'anonymous' data. In the future, if medicine is to deliver on the promise of truly 'personalized' medicine, then electronic medical records will potentially have increasing value and relevance for generations of our descendants. This will, however, require the public to consider how much privacy and anonymity they are willing to part with regarding information arising from their medical records. After all, our medical records cannot help power personalized medicine for our descendants without knowing who we, or our descendants, actually are.


Do You Know the Terms and Conditions of Your Health Apps? HIPAA, Privacy and the Growth of Digital Health

As more health care is being provided virtually through apps and web-based services, there is a need to take a closer look at whether users are fully aware of what they are consenting to, as it relates to their health information.

There needs to be a re-evaluation of how health apps obtain consent. At the same time, digital health offers an important opportunity to strengthen privacy practices on digital platforms. We ought to seize that opportunity.


How to Think About Prognosis by Telemedicine

Recently in these very pages, Evan Selinger and Arthur Caplan responded to an article in which Joel Zivot defended the use of telemedical technologies to deliver dire news to patients and their families, in the context of the viral story of a doctor informing the family of Ernest Quintana of his imminent death via robotic video link. Zivot argued that the use of technology to deliver such news is not the problem; what matters is the communicative skill of the physician. Selinger and Caplan respond that patients have fundamentally different views on the propriety of using technology in these ways, and urge a regime of informed consent.

Selinger and Caplan are probably right on the short-term policy question.

While we know there is a great deal of diversity in whether people consider this use of telemedicine disrespectful, there is also no obvious answer among the alternatives. Warning people that this might happen and letting them opt out, then, offers a short-term way to respect people's preferences. And, as Selinger and Caplan acknowledge, that may be all that is needed. Over time, communication like this may become as anodyne as it today seems avant-garde.


The Problem With Doctors Communicating via Robot is Attitudes About Technology, Not Poorly Communicating Doctors

By Evan Selinger and Arthur Caplan

Perhaps you’ve seen the debate? A physician used video chat technology to inform a hospitalized Ernest Quintana and his family that he would be dying sooner than they expected. After he passed away, they objected to how the news was delivered. Over at Slate, Joel Zivot, an anesthesiologist and ICU physician, responded to the uproar with an essay titled “In Defense of Telling Patients They’re Dying Via Robot.”


On Social Suicide Prevention, Don’t Let the Perfect be the Enemy of the Good

In a piece in The Guardian and a forthcoming article in the Yale Journal of Law and Technology, Bill of Health contributor Mason Marks recently argued that Facebook’s suicide prediction algorithm is dangerous and ought to be subject to rigorous regulation and transparency requirements. Some of his suggestions (in particular, his calls for more data, and proposals that are really about how we treat potentially suicidal people rather than how we identify them) are powerful and unobjectionable.

But Marks’s core argument—that unless Facebook’s suicide prediction algorithm is subject to the regulatory regime of medicine and operated on an opt-in basis, it is morally problematic—is misguided and alarmist.

ONC’s Proposed Rule is a Breakthrough in Patient Empowerment

By Adrian Gropper

Imagine solving the wicked problems of patient matching, consent, and a patient-centered longitudinal health record while also enabling a world of new healthcare services for patients and physicians to use. The long-awaited Notice of Proposed Rulemaking (NPRM) on information blocking from the Office of the National Coordinator for Health Information Technology (ONC) promises nothing less.

Having data automatically follow the patient is a laudable goal, but a difficult one for reasons of privacy, security, and institutional workflow. The privacy issues are clear if surveillance is the mechanism used to follow the patient. Do patients know they’re under surveillance? By whom? Is there one surveillance agency, or are there dozens in real-world practice? Can a patient choose who does the surveillance, and which health encounters, including behavioral health, social relationships, location, and finance, are excluded from it?