Symposium Introduction: Ethical, Legal, and Social Implications of Deep Phenotyping

This post is the introduction to our Ethical, Legal, and Social Implications of Deep Phenotyping symposium. All contributions to the symposium will be available here.

By Francis X. Shen

This digital symposium explores the ethical, legal, and social implications of advances in deep phenotyping in psychiatry research.

Deep phenotyping is a term used in psychiatric research and practice to describe the collection and analysis of multiple streams of behavioral and biological data, some of it collected around the clock, with the aim of identifying and intervening in critical health events.

By combining 24/7 data — on location, movement, email and text communications, and social media — with brain scans, genetics/genomics, neuropsychological batteries, and clinical interviews, researchers will have an unprecedented amount of objective, individual-level data. Analyzing this data with ever-evolving artificial intelligence (AI) offers the possibility of intervening early with precision and could even prevent the most critical sentinel events.
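The kind of early-warning analysis described above can be sketched in a few lines. This is an illustrative toy only: the feature names, weights, and threshold are invented for the example and are not drawn from any real system.

```python
# Illustrative toy only: feature names, weights, and the threshold are
# invented for this example. It sketches how several behavioral data
# streams might be combined into one risk score that triggers an early
# clinical check-in.

def risk_score(features, weights):
    """Weighted sum of normalized behavioral features, each in [0, 1]."""
    return sum(weights[k] * features[k] for k in weights)

WEIGHTS = {
    "mobility_drop": 0.4,      # reduced movement vs. personal baseline
    "sleep_disruption": 0.3,   # irregular sleep inferred from device use
    "social_withdrawal": 0.3,  # decline in outgoing messages
}

patient_week = {
    "mobility_drop": 0.8,
    "sleep_disruption": 0.6,
    "social_withdrawal": 0.9,
}

score = risk_score(patient_week, WEIGHTS)  # 0.4*0.8 + 0.3*0.6 + 0.3*0.9 = 0.77
if score > 0.7:
    print(f"Risk score {score:.2f}: flag for clinician follow-up")
```

A real system would learn such weights from data rather than fix them by hand, but the basic shape, many streams reduced to one actionable signal, is the same.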

Read More

Detecting Dementia

Cross-posted, with slight modification, from Harvard Law Today, where it originally appeared on November 21, 2020. 

By Chloe Reichel

Experts gathered last month to discuss the ethical, social, and legal implications of technological advancements that facilitate the early detection of dementia.

“Detecting Dementia: Technology, Access, and the Law,” was hosted on Nov. 16 as part of the Project on Law and Applied Neuroscience, a collaboration between the Center for Law, Brain and Behavior at Massachusetts General Hospital and the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

The event, organized by Francis X. Shen ’06 Ph.D. ’08, the Petrie-Flom Center’s senior fellow in Law and Applied Neuroscience and executive director of the Center for Law, Brain and Behavior at Massachusetts General Hospital, was one of a series hosted by the Project on Law and Applied Neuroscience on aging brains.

Early detection of dementia is a hopeful prospect for the treatment of patients, both because it may facilitate early medical intervention and because it enables more robust advance care planning.

Read More

Health Care AI in Pandemic Times

By Jenna Becker

The early days of the COVID-19 pandemic were met with the rapid rollout of artificial intelligence tools to diagnose the disease and identify patients at risk of worsening illness in health care settings.

Understandably, these tools were generally released without regulatory oversight, and some models were deployed prior to peer review. However, even after several months of ongoing use, many AI developers still have not shared their testing results for external review.

This precedent set by the pandemic may have a lasting — and potentially harmful — impact on the oversight of health care AI.

Read More

AI’s Legitimate Interest: Video Preview with Charlotte Tschider

The Health Law Policy, Bioethics, and Biotechnology Workshop provides a forum for discussion of new scholarship in these fields from the world’s leading experts.

The workshop is led by Professor I. Glenn Cohen, and presenters come from a wide range of disciplines and departments.

In this video, Charlotte Tschider gives a preview of her paper, “AI’s Legitimate Interest: Towards a Public Benefit Privacy Model,” which she will present at the Health Law Policy workshop on November 9, 2020. Watch the full video below:

Is Real-World Health Algorithm Review Worth the Hassle?

By Jenna Becker

The U.S. Food and Drug Administration (FDA) should not delay its plans to regulate clinical algorithms, despite challenges associated with reviewing the real-world performance of these products.

The FDA Software Pre-Certification (Pre-Cert) Pilot Program was designed to provide “streamlined and efficient” regulatory oversight of Software as a Medical Device (SaMD) — software products that the FDA can regulate as medical devices. The Pre-Cert program, in its pilot phase, is intended to inform the development of a future SaMD regulatory model.

Last month, the FDA released an update on Pre-Cert, highlighting lessons learned from pilot testing and next steps for developing the program. One key lesson learned was the difficulty in identifying and obtaining the real-world performance data needed to analyze the clinical effectiveness of SaMDs in practice. Although this challenge will be difficult to overcome in the near future, the FDA’s plans to regulate should not be slowed by insufficient postmarket data.

Read More

Insufficient Protections for Health Data Privacy: Lessons from Dinerstein v. Google

By Jenna Becker

A data privacy lawsuit against the University of Chicago Medical Center and Google was recently dismissed, demonstrating the difficulty of pursuing claims against hospitals that share patient data with tech companies.

Patient data sharing between health systems and large software companies is becoming increasingly common as these organizations chase the potential of artificial intelligence and machine learning in healthcare. However, many tech firms also own troves of consumer data, and these companies may be able to match up “de-identified” patient records with a patient’s identity.
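The re-identification risk described above can be illustrated with a toy linkage attack. All names, fields, and records here are hypothetical; the point is only that a simple join on quasi-identifiers (ZIP code, birth date, sex) can restore identities to “de-identified” rows.

```python
# Illustrative sketch with hypothetical data: a simple linkage attack.
# "De-identified" clinical records stripped of names can sometimes be
# re-identified by joining on quasi-identifiers that also appear in a
# consumer dataset held by the same company.

deidentified_records = [
    {"zip": "02138", "birth_date": "1954-07-31", "sex": "F", "diagnosis": "T2D"},
    {"zip": "60637", "birth_date": "1987-02-14", "sex": "M", "diagnosis": "MDD"},
]

consumer_data = [
    {"name": "Jane Roe", "zip": "02138", "birth_date": "1954-07-31", "sex": "F"},
    {"name": "John Doe", "zip": "60637", "birth_date": "1987-02-14", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(records, identified):
    """Match de-identified records to named individuals on quasi-identifiers."""
    index = {
        tuple(row[k] for k in QUASI_IDENTIFIERS): row["name"]
        for row in identified
    }
    return [
        {"name": index[key], **rec}
        for rec in records
        if (key := tuple(rec[k] for k in QUASI_IDENTIFIERS)) in index
    ]

for match in link(deidentified_records, consumer_data):
    print(match["name"], "->", match["diagnosis"])
```

Real linkage attacks work the same way at scale, which is why de-identification standards focus on limiting exactly these quasi-identifiers.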

Scholars, privacy advocates, and lawmakers have argued that HIPAA is inadequate in the current landscape. Dinerstein v. Google is a clear reminder that both HIPAA and contract law are insufficient for handling these types of privacy violations. Patients are left seemingly defenseless against their most personal information being shared without their meaningful consent.

Read More

Exploring Elder Care Robotics: Voice Assistants and Home Hubs

This article is part of a four-part series that examines how robotics is being developed for aging care and investigates the ethical implications. In our first article, we explored emotional companion robots, which soothe and comfort patients experiencing loneliness, depression, or diseases such as Alzheimer’s. Today, we look at voice assistants and home hubs—robots designed to coordinate and simplify daily tasks around the house.

What are Voice Assistants and Home Hubs?

Unlike other robots in this series, you are probably familiar with voice assistants and home hubs. These devices, which include Amazon Echo, Google Home, Apple Siri, Samsung Ballie, and Nest, respond to human commands (voice, motion, or other input) to complete tasks like preheating the oven, playing a podcast, or refilling a prescription. Several devices also incorporate artificial intelligence (AI) to learn household patterns and anticipate needs. However, unlike social robots (covered later in this series), voice assistants do not proactively engage with users unless programmed or commanded.

Read More

Measuring Health Privacy – Part II

This piece was part of a symposium featuring commentary from participants in the Center for Health Policy and Law’s annual conference, Promises and Perils of Emerging Health Innovations, held on April 11-12, 2019 at Northeastern University School of Law. The symposium was originally posted through the Northeastern University Law Review Online Forum.

Promises and Perils of Emerging Health Innovations Blog Symposium

We are pleased to present this symposium featuring commentary from participants in the Center for Health Policy and Law’s annual conference, Promises and Perils of Emerging Health Innovations, held on April 11-12, 2019 at Northeastern University School of Law. As a note, additional detailed analyses of issues discussed during the conference will be published in the 2021 Winter Issue of the Northeastern University Law Review.

Throughout the two-day conference, speakers and attendees discussed how innovations, including artificial intelligence, robotics, mobile technology, gene therapies, pharmaceuticals, big data analytics, tele- and virtual health care delivery, and new models of delivery, such as accountable care organizations (ACOs), retail clinics, and medical-legal partnerships (MLPs), have entered and changed the healthcare market. More dramatic innovations and market disruptions are likely in the years to come. These new technologies and market disruptions offer immense promise to advance health care quality and efficiency, as well as improve provider and patient engagement. Success will depend, however, on careful consideration of potential perils and well-planned interventions to ensure new methods ultimately further, rather than diminish, the health of patients, especially those who are the most vulnerable.

In this two-part post for the Promises and Perils of Emerging Health Innovations blog symposium Ignacio Cofone engages in a discussion centered on the importance of addressing patients’ concerns when introducing new health technologies. While privacy risks may not always be avoided altogether, Cofone posits that privacy risks (and their potential costs) should be weighed against any and all health benefits innovative technology and treatments may have. To do so, Cofone introduces the concept of using health economics and a Quality-Adjusted Life Year (QALY) framework as a way to evaluate the weight and significance of the costs and benefits related to health technologies that may raise patient privacy concerns.

By Ignacio N. Cofone
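Cofone’s QALY-based weighing can be sketched as a toy calculation. The numbers below are hypothetical and are not from his paper; the point is that once both the health benefit and the expected privacy harm are expressed in QALY-equivalents, they can be compared on a single scale.

```python
# Toy QALY-style cost-benefit sketch (all numbers hypothetical, not from
# Cofone's paper). A QALY weights years of life by a utility score in
# [0, 1], so a health gain and an expected privacy harm can be compared
# on one scale.

def qalys(years, utility):
    """Quality-adjusted life years: duration weighted by health utility."""
    return years * utility

# Benefit: suppose a monitoring tool raises average utility from 0.70 to
# 0.78 over 10 years for each patient.
benefit = qalys(10, 0.78) - qalys(10, 0.70)   # 0.8 QALYs per patient

# Privacy cost: suppose a 5% chance of a breach whose harm is valued at
# 2.0 QALY-equivalents.
expected_privacy_cost = 0.05 * 2.0            # 0.1 QALY-equivalents

net = benefit - expected_privacy_cost
print(f"Net expected benefit: {net:.2f} QALY-equivalents per patient")
```

The hard part, of course, is the valuation step: assigning a QALY-equivalent figure to a privacy harm is precisely the kind of judgment Cofone’s framework asks us to make explicit.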

Read More

Measuring Health Privacy – Part I

This piece was part of a symposium featuring commentary from participants in the Center for Health Policy and Law’s annual conference, Promises and Perils of Emerging Health Innovations, held on April 11-12, 2019 at Northeastern University School of Law. The symposium was originally posted through the Northeastern University Law Review Online Forum.

Read More

What Google Isn’t Saying About Your Health Records

By Adrian Gropper

Google’s semi-secret deal with Ascension is testing the limits of HIPAA as society grapples with the future impact of machine learning and artificial intelligence.

I. Glenn Cohen points out that HIPAA may not be keeping pace with the ways patients and society consent to uses of personal data. Is prior consent, particularly consent from vulnerable patients seeking care, a good way to regulate secret commercial deals with their caregivers? The answer to such a question is strongly influenced by how you ask it.

Read More