
AI in Digital Health: Autonomy, Governance, and Privacy

The following post is adapted from the edited volume AI in eHealth: Human Autonomy, Data Governance and Privacy in Healthcare.

By Marcelo Corrales Compagnucci and Mark Fenwick

The emergence of digital platforms and related technologies is transforming healthcare and creating new opportunities and challenges for all stakeholders in the medical space. Many of these developments rely on data and AI algorithms to prevent, diagnose, treat, and monitor sources of epidemic diseases, such as the ongoing pandemic and other pathogenic outbreaks. However, these opportunities and challenges often have a complex, multidimensional character, and any mapping of this emerging ecosystem requires a greater degree of interdisciplinary dialogue and a more nuanced appreciation of the normative and cognitive complexity of these issues.

Read More


The International Weaponization of Health Data

By Matthew Chun

International collaboration through the sharing of health data is crucial for advancing human health. But it also comes with risks — risks that countries around the world seem increasingly unwilling to take.

On the one hand, the international sharing of health-related data sets has paved the way for important advances such as mapping the human genome, tracking global health outcomes, and fighting the rise of multidrug-resistant superbugs. On the other hand, it can pose serious risks for a nation’s citizens, including re-identification, exploitation of genetic vulnerabilities by foreign parties, and unauthorized data usage. As countries aim to strike a difficult balance between furthering research and protecting national interests, recent trends indicate a shift toward tighter controls that could chill international collaborations.
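
The re-identification risk is concrete enough to demonstrate in a few lines. The sketch below runs a basic k-anonymity check, a common first-pass test applied before a data set is shared; the records and the choice of quasi-identifiers are invented for illustration:

```python
# Minimal sketch of a k-anonymity check on a shared health data set.
# The records and quasi-identifier columns below are hypothetical.
from collections import Counter

QUASI_IDENTIFIERS = ("zip_code", "birth_year", "sex")

records = [
    {"zip_code": "02138", "birth_year": 1984, "sex": "F", "diagnosis": "E11"},
    {"zip_code": "02138", "birth_year": 1984, "sex": "F", "diagnosis": "I10"},
    {"zip_code": "02139", "birth_year": 1951, "sex": "M", "diagnosis": "C61"},
]

def k_anonymity(rows, quasi_identifiers):
    """Return the size of the smallest equivalence class: a data set is
    k-anonymous if every combination of quasi-identifier values is
    shared by at least k records."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

k = k_anonymity(records, QUASI_IDENTIFIERS)
print(f"k = {k}")  # k = 1 here: the 1951/M record is unique, hence re-identifiable
```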

Read More


Mitigating Bias in Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Recently, Google announced a new direct-to-consumer (DTC) health app powered by artificial intelligence (AI) to diagnose skin conditions.

The company drew criticism for the app because the AI was trained primarily on images from people with darker white skin, light brown skin, and fair skin. As a result, the app may end up over- or under-diagnosing conditions for people with darker skin tones.

This prompts the questions: How can we mitigate biases in AI-based health care? And how can we ensure that AI improves health care, rather than augmenting existing health disparities?

Those are the questions we posed to respondents in our In Focus Series on Direct-to-Consumer Health Apps. Read their answers below, and check out their responses to the other questions in the series.
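
As background for those answers, a common first step in mitigating this kind of bias is a subgroup audit: evaluating the model separately for each skin type rather than in aggregate. The sketch below uses invented results, grouped by Fitzpatrick skin type, to illustrate the idea:

```python
# Minimal sketch of a subgroup audit: a dermatology model's accuracy is
# checked per Fitzpatrick skin type instead of only in aggregate. All
# results below are hypothetical; a real audit would use a held-out
# test set labeled with skin type.
from collections import defaultdict

# (fitzpatrick_type, true_label, predicted_label) for each test image
results = [
    ("I-II", 1, 1), ("I-II", 0, 0), ("I-II", 1, 1), ("I-II", 0, 0),
    ("III-IV", 1, 1), ("III-IV", 0, 0), ("III-IV", 1, 0),
    ("V-VI", 1, 0), ("V-VI", 1, 0), ("V-VI", 0, 1),
]

per_group = defaultdict(lambda: {"correct": 0, "total": 0})
for group, y_true, y_pred in results:
    per_group[group]["total"] += 1
    per_group[group]["correct"] += int(y_true == y_pred)

for group, counts in per_group.items():
    acc = counts["correct"] / counts["total"]
    print(f"{group}: accuracy = {acc:.2f} (n = {counts['total']})")
# A large gap between groups (here I-II vs. V-VI) is the signal that
# the training data under-represented darker skin tones.
```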

Read More


Perspectives on Data Privacy for Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Direct-to-consumer (DTC) health apps, such as apps that manage our diet, fitness, and sleep, are becoming ubiquitous in our digital world.

These apps provide a window into some of the key issues in the world of digital health — including data privacy, data access, data ownership, bias, and the regulation of health technology.

To better understand these issues, and ways forward, we contacted key stakeholders representing a range of perspectives in the field of digital health for their brief answers to five questions about DTC health apps.

Read More


Artificial Intelligence and Health Law: Updates from England

By John Tingle

Artificial intelligence (AI) is making an impact on health law in England.

The growing presence of AI in law has been chronicled by organizations such as the Law Society, which published a forward-thinking, horizon-scanning paper on artificial intelligence and the legal profession back in 2018.

The report identifies several key emerging strands of AI development and use, including Q&A chatbots, document analysis, document delivery, legal adviser support, case outcome prediction, and clinical negligence analysis. These applications of AI already show promise: one algorithm developed by researchers at University College London, the University of Sheffield, and the University of Pennsylvania was able to predict case outcomes with 79% accuracy.
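
For readers curious what such a predictor looks like, the sketch below shows the general shape of the approach: the published study reportedly combined word n-gram features with a support vector machine. This scikit-learn pipeline is an illustration of that general technique with invented training examples, not the researchers' code:

```python
# Minimal sketch of a case-outcome text classifier: TF-IDF word n-grams
# feeding a linear SVM. The two example judgments and labels are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Training data: (text of judgment sections, outcome) pairs.
texts = [
    "The applicant alleged a violation of Article 6 ...",
    "The Court found the claim manifestly ill-founded ...",
]
outcomes = [1, 0]  # 1 = violation found, 0 = no violation

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),  # unigrams through trigrams
    LinearSVC(),                          # linear support vector machine
)
model.fit(texts, outcomes)
print(model.predict(["The applicant alleged a violation of Article 6 ..."]))
```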

Read More


We Need to Do More with Hospitals’ Data, But There Are Better Ways

By Wendy Netter Epstein and Charlotte Tschider

This May, Google announced a new partnership with national hospital chain HCA Healthcare to consolidate HCA’s digital health data from electronic medical records and medical devices and store it in Google Cloud.

This move is just the latest in a growing trend: in the first half of this year alone, at least 38 partnerships have been announced between providers and big tech companies. Health systems are hoping to leverage the know-how of tech titans to unlock the potential of their treasure troves of data.

Health systems have faltered in achieving this on their own, facing technical and practical challenges on the one hand, and political and ethical concerns on the other.

Read More


Top Health Considerations in the European Commission’s ‘Harmonised Rules on Artificial Intelligence’

By Rachele Hendricks-Sturrup

On April 21, 2021, the European Commission released a “first-ever” legal framework on artificial intelligence (AI) in an attempt to address societal risks associated with AI implementation.

The EU has now effectively set the global stage for AI regulation, as the first union of member states to create a legal framework specifically intended to address or mitigate the potentially harmful effects of broad AI implementation.

Within the proposed framework, the Commission touched on a variety of considerations and “high-risk” AI system scenarios, defining high-risk AI systems as those that pose significant (material or immaterial) risks to the health and safety or fundamental rights of persons.

This post outlines four key considerations in the proposal with regard to health: 1) prioritizing emergency health care; 2) law enforcement profiling as a social determinant of health; 3) immigrant health risk screening; and 4) AI regulatory sandboxes and a health data space to support AI product commercialization and public health innovation.

Read More


Regulatory Gap in Health Tech: Resource Allocation Algorithms

By Jenna Becker

Hospitals use artificial intelligence and machine learning (AI/ML) not only in clinical decision-making, but also to allocate scarce resources.

These resource allocation algorithms have received less regulatory attention than clinical decision-making algorithms, but nevertheless pose similar concerns, particularly with respect to their potential for bias.

Without regulatory oversight, the risks associated with resource allocation algorithms are significant. Health systems must take particular care when implementing these solutions.
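
To make the concern concrete, the sketch below pairs a hypothetical score-based allocation rule with the kind of simple audit a health system might run before deployment: comparing allocation rates across demographic groups. All patients, groups, and scores are invented:

```python
# Minimal sketch: a scoring algorithm ranks patients for a scarce
# resource, and an audit compares allocation rates across groups.
patients = [
    {"id": "A", "group": "g1", "score": 0.91},
    {"id": "B", "group": "g2", "score": 0.60},
    {"id": "C", "group": "g1", "score": 0.75},
    {"id": "D", "group": "g2", "score": 0.40},
]
BEDS = 2  # scarce resource: two available beds

# Allocate to the highest-scoring patients. This looks neutral, but if
# the score itself encodes bias (e.g., prior health care spending used
# as a proxy for medical need), the allocation inherits that bias.
ranked = sorted(patients, key=lambda p: p["score"], reverse=True)
allocated = {p["id"] for p in ranked[:BEDS]}

for group in ("g1", "g2"):
    members = [p for p in patients if p["group"] == group]
    rate = sum(p["id"] in allocated for p in members) / len(members)
    print(f"{group}: allocation rate = {rate:.2f}")
# Output: g1 receives both beds (rate 1.00), g2 none (rate 0.00) --
# exactly the kind of skew an audit should surface before deployment.
```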

Read More


The Future of Race-Based Clinical Algorithms

By Jenna Becker

Race-based clinical algorithms are widely used. Yet many race-based adjustments lack evidence and worsen racism in health care. 

Prominent politicians have called for research into the use of race-based algorithms in clinical care as part of a larger effort to understand the public health impacts of structural racism. Physicians and researchers, for their part, have pressed for urgent reconsideration of the use of race in these algorithms.

Efforts to remove race-based algorithms from practice have thus far been piecemeal. Medical associations, health systems, and policymakers must work in tandem to rapidly identify and remove racist algorithms from clinical practice.
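
The estimated glomerular filtration rate (eGFR) is perhaps the most widely discussed example. The sketch below contrasts the 2009 CKD-EPI creatinine equation, which multiplied results for Black patients by 1.159, with the 2021 refit that removed race entirely. It is for illustration only, not clinical use, which requires a validated implementation:

```python
# Minimal sketch of a race-based clinical algorithm and its race-free
# replacement: the 2009 vs. 2021 CKD-EPI creatinine equations for eGFR.
def egfr_2009(scr, age, female, black):
    """2009 CKD-EPI creatinine equation (race-adjusted)."""
    kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
    egfr = (141 * min(scr / kappa, 1) ** alpha
            * max(scr / kappa, 1) ** -1.209 * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient at issue
    return egfr

def egfr_2021(scr, age, female):
    """2021 CKD-EPI creatinine equation (race-free refit)."""
    kappa, alpha = (0.7, -0.241) if female else (0.9, -0.302)
    egfr = (142 * min(scr / kappa, 1) ** alpha
            * max(scr / kappa, 1) ** -1.200 * 0.9938 ** age)
    return egfr * 1.012 if female else egfr

# Same hypothetical patient, different equations: the race multiplier
# alone shifts the estimate by ~16%, enough to change CKD staging and
# referral decisions.
print(egfr_2009(scr=1.2, age=55, female=False, black=True))
print(egfr_2009(scr=1.2, age=55, female=False, black=False))
print(egfr_2021(scr=1.2, age=55, female=False))
```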

Read More


Building Trust Through Transparency? FDA Regulation of AI/ML-Based Software

By Jenna Becker

To generate trust in artificial intelligence and machine learning (AI/ML)-based software used in health care, the U.S. Food and Drug Administration (FDA) intends to regulate this technology with an eye toward user transparency. 

But will transparency in health care AI actually build trust among users? Or will algorithm explanations go ignored? I argue that individual algorithm explanations will likely do little to build trust among health care AI users.
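
For context, the sketch below shows what a typical per-prediction explanation amounts to in practice: a list of feature contributions to a single risk score from a linear model. The weights and patient values are invented; whether such a readout actually earns a clinician's trust is precisely the open question:

```python
# Minimal sketch of a per-prediction "algorithm explanation": feature
# contributions to one risk score from a linear model with a logistic
# link. Model weights and patient values are hypothetical.
import math

weights = {"age": 0.04, "creatinine": 0.9, "on_dialysis": 1.5}
bias = -4.0
patient = {"age": 67, "creatinine": 1.8, "on_dialysis": 0}

contributions = {f: weights[f] * patient[f] for f in weights}
score = bias + sum(contributions.values())
risk = 1 / (1 + math.exp(-score))  # logistic link -> probability

print(f"predicted risk: {risk:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:12s} contributed {value:+.2f}")
# Whether a busy clinician reads, understands, or acts on this readout
# is the question the post raises.
```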

Read More