A home hub featuring icons of all the tasks it can assist with in a thinking cloud

Exploring Elder Care Robotics: Voice Assistants and Home Hubs

This article is part of a four-part series examining how robots are being developed for aging care, and investigating the ethical implications of their use. In our first article, we explored emotional companion robots, which soothe and comfort patients experiencing loneliness, depression, or diseases such as Alzheimer’s. Today, we look at voice assistants and home hubs: robots designed to coordinate and simplify daily tasks around the house.

What are Voice Assistants and Home Hubs?

Unlike the other robots in this series, voice assistants and home hubs are probably already familiar to you. These devices, which include Amazon Echo, Google Home, Apple Siri, Samsung Ballie, and Nest, respond to human commands (voice, motion, or other input) to complete tasks such as preheating the oven, playing a podcast, or refilling a prescription. Several also incorporate artificial intelligence (AI) to learn household patterns and anticipate needs. However, unlike social robots (covered later in this series), voice assistants do not proactively engage with users unless programmed or commanded.

Read More

Illustration of a person running away carrying "stolen" 1's and 0's

Measuring Health Privacy – Part II

This piece was part of a symposium featuring commentary from participants in the Center for Health Policy and Law’s annual conference, Promises and Perils of Emerging Health Innovations, held on April 11-12, 2019 at Northeastern University School of Law. The symposium was originally posted through the Northeastern University Law Review Online Forum.

Promises and Perils of Emerging Health Innovations Blog Symposium

We are pleased to present this symposium featuring commentary from participants in the Center for Health Policy and Law’s annual conference, Promises and Perils of Emerging Health Innovations, held on April 11-12, 2019 at Northeastern University School of Law. As a note, additional detailed analyses of issues discussed during the conference will be published in the 2021 Winter Issue of the Northeastern University Law Review.

Throughout the two-day conference, speakers and attendees discussed how innovations, including artificial intelligence, robotics, mobile technology, gene therapies, pharmaceuticals, big data analytics, tele- and virtual health care delivery, and new delivery models, such as accountable care organizations (ACOs), retail clinics, and medical-legal partnerships (MLPs), have entered and changed the health care market. More dramatic innovations and market disruptions are likely in the years to come. These new technologies and market disruptions offer immense promise to advance health care quality and efficiency, as well as improve provider and patient engagement. Success will depend, however, on careful consideration of potential perils and well-planned interventions to ensure new methods ultimately further, rather than diminish, the health of patients, especially those who are the most vulnerable.

In this two-part post for the Promises and Perils of Emerging Health Innovations blog symposium, Ignacio Cofone discusses the importance of addressing patients’ privacy concerns when introducing new health technologies. While privacy risks may not always be avoidable altogether, Cofone posits that privacy risks (and their potential costs) should be weighed against the health benefits that innovative technologies and treatments may offer. To do so, he introduces the concept of using health economics and a Quality-Adjusted Life Year (QALY) framework to evaluate the weight and significance of the costs and benefits of health technologies that raise patient privacy concerns.
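To make the QALY framing concrete, here is a minimal sketch of the kind of comparison Cofone describes. All numbers and function names are hypothetical, purely for illustration; a real health-economic analysis would estimate these quantities from clinical and survey data.

```python
# Illustrative QALY-style comparison (all numbers hypothetical).
# QALYs gained = years of life x quality-of-life weight (0 = death, 1 = full health).

def qalys(years: float, quality_weight: float) -> float:
    """Quality-adjusted life years for a period lived at a given quality weight."""
    return years * quality_weight

# Suppose a hypothetical monitoring technology adds 2 years at 0.8 quality...
health_benefit = qalys(2.0, 0.8)  # 1.6 QALYs gained

# ...but its privacy risk is estimated (again, hypothetically) to cost the
# equivalent of 0.3 QALYs in expected harm and distress to patients.
privacy_cost = 0.3

# The framework asks whether the technology is worthwhile net of privacy costs.
net_benefit = health_benefit - privacy_cost
print(f"Net benefit: {net_benefit:.1f} QALYs")
```

The point of the sketch is the structure, not the numbers: expressing both the health benefit and the privacy harm in the same unit makes the trade-off explicit rather than leaving privacy as an unquantified objection.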


By Ignacio N. Cofone

Read More

Illustration of cascading 1's and 0's, blue text on a black background

Measuring Health Privacy – Part I

This piece was part of a symposium featuring commentary from participants in the Center for Health Policy and Law’s annual conference, Promises and Perils of Emerging Health Innovations, held on April 11-12, 2019 at Northeastern University School of Law. The symposium was originally posted through the Northeastern University Law Review Online Forum.



Read More

Close up of a computer screen displaying code

What Google Isn’t Saying About Your Health Records

By Adrian Gropper

Google’s semi-secret deal with Ascension is testing the limits of HIPAA as society grapples with the future impact of machine learning and artificial intelligence.

I. Glenn Cohen points out that HIPAA may not be keeping up with how patients and society consent to the ways personal data is used. Is prior consent, particularly consent from vulnerable patients seeking care, a good way to regulate secret commercial deals with their caregivers? The answer is strongly influenced by how you ask the question.

Read More

Illustration of multicolored profiles. An overlay of strings of ones and zeroes is visible

Understanding Racial Bias in Medical AI Training Data

By Adriana Krasniansky

Interest in artificial intelligence (AI) for health care has grown at an astounding pace: the global AI health care market is expected to reach $17.8 billion by 2025, and AI-powered systems are being designed to support medical activities ranging from patient diagnosis and triage to drug pricing.

Yet, as researchers across the technology and medical fields agree, “AI systems are only as good as the data we put into them.” When AI systems are trained on patient datasets that are incomplete, or that underrepresent or misrepresent certain populations, they stand to develop discriminatory biases in their outcomes. In this article, we present three examples that demonstrate the potential for racial bias in medical AI arising from training data.

Read More
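The mechanism behind this claim can be shown with a deliberately trivial example. The data, groups, and labels below are entirely synthetic (this is not a real clinical model): a baseline “classifier” that simply predicts the most common outcome in its training data looks accurate overall while failing completely on the underrepresented group.

```python
# Toy illustration with synthetic data: a majority-vote baseline
# inherits the imbalance of its training set.
from collections import Counter

# Hypothetical training set: (group, outcome) pairs, with group B
# heavily underrepresented and carrying a different typical outcome.
training = [("A", "low_risk")] * 90 + [("B", "high_risk")] * 10

# Baseline model: always predict the overall most common label.
majority_label = Counter(label for _, label in training).most_common(1)[0][0]

def accuracy(group: str) -> float:
    """Accuracy of the majority-vote baseline within one group."""
    cases = [label for g, label in training if g == group]
    correct = sum(1 for label in cases if label == majority_label)
    return correct / len(cases)

print(accuracy("A"))  # 1.0 -- looks perfect for the majority group
print(accuracy("B"))  # 0.0 -- fails entirely for the underrepresented group
```

Aggregate accuracy here is 90 percent, which is exactly why such biases can go unnoticed: the headline metric hides a total failure on the minority group. Real medical models are far more sophisticated, but the underlying risk is the same.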

Photograph from above of a health care provider taking a patient's blood pressure.

Diving Deeper into Amazon Alexa’s HIPAA Compliance

By Adriana Krasniansky

Earlier this year, consumer technology company Amazon made waves in health care when it announced that its Alexa Skills Kit, a suite of tools for building voice programs, would be HIPAA compliant. Using the Alexa Skills Kit, companies could build voice experiences for Amazon Echo devices that communicate personal health information with patients. 

Amazon initially limited access to its HIPAA-updated voice platform to six health care companies, ranging from pharmacy benefit managers (PBMs) to hospitals. However, Amazon plans to expand access and has identified health care as a top focus area. Given Thursday’s announcement of new Alexa-enabled wearables (earbuds, glasses, a biometric ring)—likely indicators of upcoming personal health applications—let’s dive deeper into Alexa’s HIPAA compliance and its implications for the health care industry.
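As a rough schematic of what such a voice experience does, consider the hand-rolled intent handler below. This is emphatically not the actual Alexa Skills Kit API; the request and response structures, intent names, and slot names are all invented for illustration, and a real HIPAA-eligible skill involves Amazon’s SDK, authentication, and vetted encrypted endpoints.

```python
# Schematic sketch of voice-intent routing for a health skill.
# Hypothetical structures only -- NOT the real Alexa Skills Kit API.

def handle_intent(request: dict) -> dict:
    """Map a parsed voice request to a spoken response."""
    intent = request.get("intent")
    if intent == "RefillPrescription":
        # A real HIPAA-compliant skill would transmit protected health
        # information only over encrypted channels to approved endpoints.
        rx = request.get("slots", {}).get("medication", "your medication")
        speech = f"I've sent a refill request for {rx} to your pharmacy."
    elif intent == "MedicationReminder":
        speech = "Your next dose is scheduled for 8 p.m."
    else:
        speech = "Sorry, I can't help with that yet."
    return {"response": {"outputSpeech": speech}}

reply = handle_intent({"intent": "RefillPrescription",
                       "slots": {"medication": "lisinopril"}})
print(reply["response"]["outputSpeech"])
```

The sketch makes the HIPAA stakes visible: the moment a slot value like a medication name enters the response, the skill is handling personal health information, which is precisely what the compliance update governs.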
Read More

Photograph of a doctor holding a headset sitting in front of a laptop

Navigating Sensitive Hospital Conversations in the Age of Telemedicine

By Adriana Krasniansky

On March 5, 2019, a terminally ill patient from Fremont, California, learned that he was expected to die within several days. The doctor who delivered the news did so via a robotic video teleconferencing device. 

Ernest Quintana, a 79-year-old patient with a previously diagnosed terminal lung condition, was taken to the Kaiser Permanente Fremont Medical Center emergency room after reporting shortness of breath. His 16-year-old granddaughter, Annalisia Wilharm, was with him when a nurse stopped by and said that a doctor would visit shortly to deliver Mr. Quintana’s results.

The video below, recorded by Ms. Wilharm, shows Mr. Quintana’s consultation with a critical care doctor through an Ava Robotics telepresence device—in which the doctor explains Mr. Quintana’s rapidly worsening condition and suggests transitioning to comfort care. Ms. Wilharm and her family chose to share the video with local media and on Facebook, inciting a debate around the legal and ethical challenges of using telemedicine in critical care conversations. 

Read More

Picture of a doctor, from the neck down, using an iPad with digital health graphics superimposed

Is Data Sharing Caring Enough About Patient Privacy? Part II: Potential Impact on US Data Sharing Regulations

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

Earlier, we discussed the new suit filed against Google, the University of Chicago (UC), and UChicago Medicine, focusing on the disclosure of patient data from UC to Google. This piece goes beyond that background to consider the potential impact of the lawsuit in the U.S., and to place it in the context of other trends in data privacy and security.

Read More

Image of binary code and DNA

Is Data Sharing Caring Enough About Patient Privacy? Part I: The Background

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

The huge prospects of artificial intelligence (AI) and machine learning (ML), as well as the increasing trend toward public-private partnerships in biomedical innovation, underscore the importance of effective governance and regulation of data sharing in the health and life sciences. Cutting-edge biomedical research demands high-quality data to ensure safe and effective health products. It is often argued that greater access to the individual patient data collections stored in hospitals’ medical records systems could considerably advance medical science and improve patient care. However, as public and private actors attempt to gain access to such high-quality data to train their advanced algorithms, a number of sensitive ethical and legal aspects also need to be carefully considered. Besides giving rise to safety, antitrust, trade secret, and intellectual property issues, such practices have raised serious concerns regarding patient privacy, confidentiality, and the commitments made to patients through appropriate informed consent processes.

Read More

Robot and human facing each other, silhouetted against a lit background

Please and Thank You: Do We Have Moral Obligations Toward Emotionally Intelligent Machines?

By Sonia Sethi

Do you say “thank you” to Alexa (or your preferred AI assistant)?

A quick poll of my social media network revealed that, out of 76 participants, 51 percent thank their artificial intelligence (AI) assistant some or all of the time. When asked why (or why not) they express thanks, people gave a myriad of interesting, often entertaining, responses. Common themes emerged: saying thanks because it’s polite or habitual; not saying thanks because “it’s just a database and not a human”; and the ever-present paranoia of a robot apocalypse.

But do you owe Alexa your politeness? Do you owe it any moral consideration whatsoever?

Read More