AI in Digital Health: Autonomy, Governance, and Privacy

The following post is adapted from the edited volume AI in eHealth: Human Autonomy, Data Governance and Privacy in Healthcare.

By Marcelo Corrales Compagnucci and Mark Fenwick

The emergence of digital platforms and related technologies is transforming healthcare and creating new opportunities and challenges for all stakeholders in the medical space. Many of these developments rely on data and AI algorithms to prevent, diagnose, treat, and monitor epidemic diseases, such as the ongoing pandemic and other pathogenic outbreaks. However, these opportunities and challenges often have a complex, multidimensional character, and any mapping of this emerging ecosystem requires a greater degree of interdisciplinary dialogue and a more nuanced appreciation of the normative and cognitive complexity of these issues.

Mitigating Bias in Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Recently, Google announced a new direct-to-consumer (DTC) health app powered by artificial intelligence (AI) to diagnose skin conditions.

The company drew criticism for the app because the AI was trained primarily on images from people with darker white skin, light brown skin, and fair skin. This means the app may end up over- or under-diagnosing conditions for people with darker skin tones.

This prompts the questions: How can we mitigate biases in AI-based health care? And how can we ensure that AI improves health care, rather than augmenting existing health disparities?

That’s what we asked respondents in our In Focus Series on Direct-to-Consumer Health Apps. Read their answers below, and check out their responses to the other questions in the series.

We Need to Do More with Hospitals’ Data, But There Are Better Ways

By Wendy Netter Epstein and Charlotte Tschider

This May, Google announced a new partnership with national hospital chain HCA Healthcare to consolidate HCA’s digital health data from electronic medical records and medical devices and store it in Google Cloud.

This move is just the latest in a growing trend — in the first half of this year alone, at least 38 partnerships have been announced between providers and big tech. Health systems are hoping to leverage the know-how of tech titans to unlock the potential of their treasure troves of data.

Health systems have faltered in achieving this on their own, facing, on the one hand, technical and practical challenges, and, on the other, political and ethical concerns.

Top Health Considerations in the European Commission’s ‘Harmonised Rules on Artificial Intelligence’

By Rachele Hendricks-Sturrup

On April 21, 2021, the European Commission released a “first-ever” legal framework on artificial intelligence (AI) in an attempt to address societal risks associated with AI implementation.

The EU has now effectively set a global stage for AI regulation, being the first union of member states to create a legal framework specifically intended to address or mitigate the potentially harmful effects of broad AI implementation.

Within the proposed framework, the Commission touched on a variety of considerations and “high-risk” AI system scenarios. The Commission defined high-risk AI systems as those that pose significant (material or immaterial) risks to the health and safety or fundamental rights of persons.

This post outlines four key considerations in the proposal with regard to health: 1) prioritizing emergency health care; 2) law enforcement profiling as a social determinant of health; 3) immigrant health risk screening; and 4) AI regulatory sandboxes and a health data space to support AI product commercialization and public health innovation.

Regulatory Gap in Health Tech: Resource Allocation Algorithms

By Jenna Becker

Hospitals use artificial intelligence and machine learning (AI/ML) not only in clinical decision-making, but also to allocate scarce resources.

These resource allocation algorithms have received less regulatory attention than clinical decision-making algorithms, but nevertheless pose similar concerns, particularly with respect to their potential for bias.

Without regulatory oversight, the risks associated with resource allocation algorithms are significant. Health systems must take particular care when implementing these solutions.

A Closer Look at FDA’s Newly Released AI/ML Action Plan

By Vrushab Gowda

The U.S. Food and Drug Administration (FDA or “the Agency”) recently issued its long-awaited AI/ML (Artificial Intelligence/Machine Learning) Action Plan.

Announced amid the closing days of Stephen Hahn’s term as Commissioner, it takes steps toward establishing a dedicated regulatory strategy for AI products intended as software as a medical device (SaMD), versus those embedded within physical hardware. The FDA has already approved a number of such products for clinical use; however, AI algorithms’ self-learning capabilities expose the limitations of traditional regulatory pathways.

The Action Plan further outlines the first major objectives of the Digital Health Center of Excellence (DHCoE), which was established to much fanfare but whose early moves have remained somewhat unclear. This document presents a policy roadmap for its years ahead.

Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to utilize the rivers of data we generate to better diagnose and treat medical conditions, especially mental health ones, such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once both digital phenotyping’s greatest strength and a significant challenge.

For example, the average smartphone user spends 2.25 hours a day using the 60-90 apps they have installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When dealing with this quantity of data generated by each patient or research subject, how does the care team ensure that they do not miss important predictors of health?

Health Care AI in Pandemic Times

By Jenna Becker

The early days of the COVID-19 pandemic were met with the rapid rollout of artificial intelligence tools to diagnose the disease and identify patients at risk of worsening illness in health care settings.

Understandably, these tools were generally released without regulatory oversight, and some models were deployed prior to peer review. However, even after several months of ongoing use, several AI developers have still not shared their testing results for external review.

This precedent set by the pandemic may have a lasting — and potentially harmful — impact on the oversight of health care AI.

AI’s Legitimate Interest: Video Preview with Charlotte Tschider

The Health Law Policy, Bioethics, and Biotechnology Workshop provides a forum for discussion of new scholarship in these fields from the world’s leading experts.

The workshop is led by Professor I. Glenn Cohen, and presenters come from a wide range of disciplines and departments.

In this video, Charlotte Tschider gives a preview of her paper, “AI’s Legitimate Interest: Towards a Public Benefit Privacy Model,” which she will present at the Health Law Policy workshop on November 9, 2020. Watch the full video below:

Is Real-World Health Algorithm Review Worth the Hassle?

By Jenna Becker

The U.S. Food and Drug Administration (FDA) should not delay its plans to regulate clinical algorithms, despite the challenges associated with reviewing the real-world performance of these products.

The FDA Software Pre-Certification (Pre-Cert) Pilot Program was designed to provide “streamlined and efficient” regulatory oversight of Software as a Medical Device (SaMD) — software products that are regulable by the FDA as medical devices. The Pre-Cert program, in its pilot phase, is intended to inform the development of a future SaMD regulatory model.

Last month, the FDA released an update on Pre-Cert, highlighting lessons learned from pilot testing and next steps for developing the program. One key lesson learned was the difficulty in identifying and obtaining the real-world performance data needed to analyze the clinical effectiveness of SaMDs in practice. Although this challenge will be difficult to overcome in the near future, the FDA’s plans to regulate should not be slowed by insufficient postmarket data.
