Mitigating Bias in Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Recently, Google announced a new direct-to-consumer (DTC) health app powered by artificial intelligence (AI) to diagnose skin conditions.

The company drew criticism for the app because the AI was trained primarily on images from people with darker white skin, light brown skin, and fair skin. As a result, the app may end up over- or under-diagnosing conditions in people with darker skin tones.
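Bias of this kind is typically surfaced by a subgroup audit: evaluating the model separately for each skin-tone group and comparing the results. The short Python sketch below illustrates the idea; the records, group labels, and accuracy figures are invented for demonstration and are not drawn from Google's app.

from collections import defaultdict

# Hypothetical evaluation records: (Fitzpatrick skin-type group, true label, predicted label)
records = [
    ("I-II", "melanoma", "melanoma"),
    ("I-II", "benign", "benign"),
    ("V-VI", "melanoma", "benign"),  # a missed diagnosis
    ("V-VI", "benign", "benign"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in records:
    totals[group] += 1
    hits[group] += int(truth == prediction)

for group in sorted(totals):
    print(f"{group}: accuracy {hits[group] / totals[group]:.0%}")
# A gap between groups (here 100% vs. 50%) is the kind of disparity an
# underrepresentative training set can produce.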

This prompts the questions: How can we mitigate biases in AI-based health care? And how can we ensure that AI improves health care, rather than augmenting existing health disparities?

That’s what we asked respondents in our In Focus Series on Direct-to-Consumer Health Apps. Read their answers below, and check out their responses to the other questions in the series.

Perspectives on Data Privacy for Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Direct-to-consumer (DTC) health apps, such as apps that manage our diet, fitness, and sleep, are becoming ubiquitous in our digital world.

These apps provide a window into some of the key issues in the world of digital health — including data privacy, data access, data ownership, bias, and the regulation of health technology.

To better understand these issues, and ways forward, we contacted key stakeholders representing a range of perspectives in the field of digital health for their brief answers to five questions about DTC health apps.

Artificial Intelligence and Health Law: Updates from England

By John Tingle

Artificial intelligence (AI) is making an impact on health law in England.

The growing presence of AI in law has been chronicled by organizations such as the Law Society, which published a forward-thinking, horizon-scanning paper on artificial intelligence and the legal profession back in 2018.

The report identifies several key emerging strands of AI development and use, including Q&A chatbots, document analysis, document delivery, legal adviser support, case outcome prediction, and clinical negligence analysis. These applications of AI already show promise: one algorithm developed by researchers at University College London, the University of Sheffield, and the University of Pennsylvania was able to predict case outcomes with 79% accuracy.
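To make the technique concrete, the sketch below trains a simple n-gram text classifier to predict a binary case outcome, broadly in the spirit of that study. It assumes scikit-learn is available; the toy cases and outcomes are invented, and this is a simplified stand-in for the researchers' actual pipeline, not a reproduction of it.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training examples: case text -> outcome (1 = violation found)
cases = [
    "The applicant alleged unlawful detention without judicial review.",
    "The court found the search was conducted under a valid warrant.",
    "Prolonged pre-trial detention exceeded reasonable time limits.",
    "Domestic remedies were available and had not been exhausted.",
]
outcomes = [1, 0, 1, 0]

# Unigram and bigram features feeding a linear support vector machine
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(cases, outcomes)

new_case = ["The applicant was held for months without review by a judge."]
print(model.predict(new_case))  # e.g., [1]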

We Need to Do More with Hospitals’ Data, But There Are Better Ways

By Wendy Netter Epstein and Charlotte Tschider

This May, Google announced a new partnership with national hospital chain HCA Healthcare to consolidate HCA’s digital health data from electronic medical records and medical devices and store it in Google Cloud.

This move is just the latest in a growing trend — in the first half of this year alone, at least 38 partnerships have been announced between providers and big tech. Health systems are hoping to leverage the know-how of tech titans to unlock the potential of their treasure troves of data.

Health systems have faltered in achieving this on their own, facing technical and practical challenges on the one hand, and political and ethical concerns on the other.

Top Health Considerations in the European Commission’s ‘Harmonised Rules on Artificial Intelligence’

By Rachele Hendricks-Sturrup

On April 21, 2021, the European Commission released a “first-ever” legal framework on artificial intelligence (AI) in an attempt to address societal risks associated with AI implementation.

The EU has now effectively set the global stage for AI regulation as the first union of member states to create a legal framework specifically intended to address or mitigate the potentially harmful effects of broad AI implementation.

Within the proposed framework, the Commission touched on a variety of considerations and “high-risk” AI system scenarios, defining high-risk AI systems as those that pose significant (material or immaterial) risks to the health and safety or fundamental rights of persons.

This post outlines four key considerations in the proposal with regard to health: 1) prioritizing emergency health care; 2) law enforcement profiling as a social determinant of health; 3) immigrant health risk screening; and 4) AI regulatory sandboxes and a health data space to support AI product commercialization and public health innovation.

Regulatory Gap in Health Tech: Resource Allocation Algorithms

By Jenna Becker

Hospitals use artificial intelligence and machine learning (AI/ML) not only in clinical decision-making, but also to allocate scarce resources.

These resource allocation algorithms have received less regulatory attention than clinical decision-making algorithms, but nevertheless pose similar concerns, particularly with respect to their potential for bias.

Without regulatory oversight, the risks associated with resource allocation algorithms are significant. Health systems must take particular care when implementing these solutions.

The Future of Race-Based Clinical Algorithms

By Jenna Becker

Race-based clinical algorithms are widely used. Yet many race-based adjustments lack supporting evidence and exacerbate racism in health care.
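One prominent example is the 2009 CKD-EPI equation for estimated glomerular filtration rate (eGFR), which multiplied its result by 1.159 for patients recorded as Black, making kidney function appear better on paper and potentially delaying referral for care. (A 2021 refit of the equation removed the race term.) The Python sketch below transcribes the published 2009 equation to show what such an adjustment looks like in practice; the example patient values are hypothetical.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """2009 CKD-EPI creatinine equation, including its race coefficient."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1) ** alpha * max(ratio, 1) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the contested race "correction"
    return egfr

# Same patient, same lab value -- a different kidney-function estimate by race:
print(round(egfr_ckd_epi_2009(1.2, 60, female=False, black=False), 1))  # ~65.3
print(round(egfr_ckd_epi_2009(1.2, 60, female=False, black=True), 1))   # ~75.7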

Prominent politicians have called for research into the use of race-based algorithms in clinical care as part of a larger effort to understand the public health impacts of structural racism. Physicians and researchers have called for an urgent reconsideration of the use of race in these algorithms. 

Efforts to remove race-based algorithms from practice have thus far been piecemeal. Medical associations, health systems, and policymakers must work in tandem to rapidly identify and remove racist algorithms from clinical practice.

Building Trust Through Transparency? FDA Regulation of AI/ML-Based Software

By Jenna Becker

To generate trust in artificial intelligence and machine learning (AI/ML)-based software used in health care, the U.S. Food and Drug Administration (FDA) intends to regulate this technology with an eye toward user transparency. 

But will transparency in health care AI actually build trust among users? Or will algorithm explanations simply go ignored? I argue that individual algorithm explanations will likely do little to build that trust.

A Closer Look at FDA’s Newly Released AI/ML Action Plan

By Vrushab Gowda

The U.S. Food and Drug Administration (FDA or “the Agency”) recently issued its long-awaited AI/ML (Artificial Intelligence/Machine Learning) Action Plan.

Announced amid the closing days of Stephen Hahn’s term as Commissioner, it takes steps toward establishing a dedicated regulatory strategy for AI products that qualify as software as a medical device (SaMD), as distinct from those embedded within physical hardware. The FDA has already approved a number of such products for clinical use; however, AI algorithms’ self-learning capabilities expose the limitations of traditional regulatory pathways.

The Action Plan further outlines the first major objectives of the Digital Health Center of Excellence (DHCoE), which was established to much fanfare, but whose early moves have remained somewhat unclear. This document presents a policy roadmap for the years ahead.

Computational Psychiatry for Precision Sentencing in Criminal Law

By Francis X. Shen

A core failing of the criminal justice system is its inability to individualize criminal sentences and tailor probation and parole to meet the unique profile of each offender.

As legal scholar and now federal judge Stephanos Bibas has observed, “All too often … sentencing guidelines and statutes act as sledgehammers rather than scalpels.”

As a result, dangerous offenders may be released, while offenders who pose little risk to society are left behind bars. And recidivism is common — the U.S. has an astounding recidivism rate of 80% — in part because the current criminal justice system largely fails to address mental health challenges, which are heavily over-represented in the justice system.

Advances in computational psychiatry, such as the deep phenotyping methods explored in this symposium, offer clinicians newfound abilities to practice precision psychiatry. The idea behind precision psychiatry is both simple and elusive: treat individuals as individuals. Yet advancing such a program in practice is “very ambitious” because no two individual brains — and the experiences those brains have had over a lifetime — are the same.

Deep phenotyping offers the criminal justice system the tools to improve public safety, identify low-risk offenders, and modify decision-making to reduce recidivism. Computational psychiatry can lead to what can be described as precision sentencing.
