Artificial Intelligence Plus Data Democratization Requires New Health Care Framework

By Michael L. Millenson

The latest draft government strategic plan for health information technology pledges to support health information sharing among individuals, health care providers and others “so that they can make informed decisions and create better health outcomes.”

Those good intentions notwithstanding, the current health data landscape is dramatically different from when the organizational author of the plan, the Office of the National Coordinator for Health IT, was formed two decades ago. As Price and Cohen have pointed out, entities subject to federal Health Insurance Portability and Accountability Act (HIPAA) requirements represent just the tip of the informational iceberg. Looming larger are health information generated by non-HIPAA-covered entities, user-generated health information, and non-health information used to generate inferences about treatment and health improvement.



Who’s Liable for Bad Medical Advice in the Age of ChatGPT?

By Matthew Chun

By now, everyone’s heard of ChatGPT — an artificial intelligence (AI) system by OpenAI that has captivated the world with its ability to process and generate humanlike text in various domains. In the field of medicine, ChatGPT has already been reported to ace the U.S. medical licensing exam, diagnose illnesses, and even outshine human doctors on measures of perceived empathy, raising many questions about how AI will reshape health care as we know it.

But what happens when AI gets things wrong? What are the risks of using generative AI systems like ChatGPT in medical practice, and who is ultimately held responsible for patient harm? This blog post will examine the liability risks for health care providers and AI providers alike as ChatGPT and similar AI models increasingly are used for medical applications.



Governing Health Data for Research, Development, and Innovation: The Missteps of the European Health Data Space Proposal

By Enrique Santamaría

Together with the Data Governance Act (DGA) and the General Data Protection Regulation (GDPR), the proposal for a Regulation on the European Health Data Space (EHDS) will most likely form the new regulatory and governance framework for the use of health data in the European Union. Although well-intentioned and much needed, there are aspects of the EHDS that require further debate, reconsideration, and amendment. Clarity about what constitutes scientific research is particularly needed.



AI in Digital Health: Autonomy, Governance, and Privacy

The following post is adapted from the edited volume AI in eHealth: Human Autonomy, Data Governance and Privacy in Healthcare.

By Marcelo Corrales Compagnucci and Mark Fenwick

The emergence of digital platforms and related technologies is transforming healthcare and creating new opportunities and challenges for all stakeholders in the medical space. Many of these developments rely on data and AI algorithms to prevent, diagnose, treat, and monitor epidemic diseases, such as the ongoing pandemic and other pathogenic outbreaks. However, these opportunities and challenges are often complex and multidimensional, and any mapping of this emerging ecosystem requires greater interdisciplinary dialogue and a more nuanced appreciation of the normative and cognitive complexity of these issues.



The International Weaponization of Health Data

By Matthew Chun

International collaboration through the sharing of health data is crucial for advancing human health. But it also comes with risks — risks that countries around the world seem increasingly unwilling to take.

On the one hand, the international sharing of health-related data sets has paved the way for important advances such as mapping the human genome, tracking global health outcomes, and fighting the rise of multidrug-resistant superbugs. On the other hand, it can pose serious risks for a nation’s citizens, including re-identification, exploitation of genetic vulnerabilities by foreign parties, and unauthorized data usage. As countries aim to strike a difficult balance between furthering research and protecting national interests, recent trends indicate a shift toward tighter controls that could chill international collaborations.



Will NIH Learn from Myriad when Settling Its mRNA Inventorship Dispute with Moderna?

By Jorge L. Contreras

The National Institutes of Health (NIH) is currently embroiled in a dispute over the ownership of patent rights to Moderna’s flagship mRNA COVID-19 vaccine (mRNA-1273).

The NIH, which funded much of Moderna’s research on the COVID-19 vaccine, should be assertive in exerting control over the results of this taxpayer-funded research. Failing to do so would be a missed opportunity for the public sector to have a say in the distribution and pricing of this critical medical technology.



The Civil Rights Challenge to Gene Patenting

By Jorge L. Contreras

In 2009, the American Civil Liberties Union (ACLU) launched a unique lawsuit against Myriad Genetics, challenging fifteen claims of seven patents covering various aspects of the BRCA1/2 genes and their use in diagnosing risk for breast and ovarian cancer. In mounting this case, the ACLU assembled a coalition of lawyers, scientists, counselors, patients, and advocates in an unprecedented challenge not only to one company’s patents, but also to the entire practice of gene patenting in America. And, against the odds, they won. In 2013, the U.S. Supreme Court ruled in Association for Molecular Pathology v. Myriad Genetics that naturally occurring DNA sequences are not patentable, a ruling that has had repercussions throughout the scientific community and the biotechnology industry.

In The Genome Defense: Inside the Epic Legal Battle to Determine Who Owns Your DNA (New York: Hachette/Algonquin, 2021), I describe the long road that led to this unlikely Supreme Court victory. It began in 2003, when the ACLU hired its first science advisor, a Berkeley-based cellist and non-profit organizer named Tania Simoncelli. At the ACLU, Simoncelli’s job was to identify science-related issues that the ACLU could do something about, from DNA fingerprinting to functional MRI brain imaging. A couple of years into the role, Simoncelli mentioned gene patenting to Chris Hansen, a veteran ACLU litigator who had been involved in cases ranging from mental health to school desegregation to online porn. At first, Hansen didn’t believe her. How could a company patent something inside the human body? But Simoncelli persisted, showing him articles and statistics demonstrating that, by 2005, more than 20% of the human genome was covered by patents. The realization led to Hansen’s oft-quoted exclamation, “Who can we sue?”



Should We Regulate Direct-to-Consumer Health Apps?

By Sara Gerke and Chloe Reichel

According to one estimate, over 318,000 health apps are available in app stores, and over 200 health apps are added each day. Of these, only a fraction are regulated by the U.S. Food and Drug Administration (FDA): those classified as “medical devices,” which typically pose a moderate to high risk to user safety.

In this final installment of our In Focus Series on Direct-to-Consumer Health Apps, we asked our respondents to reflect on this largely unregulated space in health tech.

Specifically, we asked: How can/should regulators deal with the assessment of health apps? For apps not currently regulated by the FDA, should they undergo any kind of review, such as whether they are helpful for consumers?

Read their answers below, and explore the following links for their responses to other questions in the series.



Mitigating Bias in Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Recently, Google announced a new direct-to-consumer (DTC) health app powered by artificial intelligence (AI) to diagnose skin conditions.

The company drew criticism for the app because the AI was primarily trained on images from people with darker white skin, light brown skin, and fair skin. This means the app may end up over- or under-diagnosing conditions for people with darker skin tones.

This prompts the questions: How can we mitigate bias in AI-based health care? And how can we ensure that AI improves health care, rather than amplifying existing health disparities?

That’s what we asked of our respondents to our In Focus Series on Direct-to-Consumer Health Apps. Read their answers below, and check out their responses to the other questions in the series.



Who Owns the Data Collected by Direct-to-Consumer Health Apps?

By Sara Gerke and Chloe Reichel

Who owns the data that are collected via direct-to-consumer (DTC) health apps? Who should own those data?

We asked our respondents to answer these questions in the third installment of our In Focus Series on Direct-to-Consumer Health Apps. Learn about the respondents and their views on data privacy concerns in the first installment of this series, and read their thoughts on consumer access to DTC health app data in the second installment.
