We Need to Do More with Hospitals’ Data, But There Are Better Ways

By Wendy Netter Epstein and Charlotte Tschider

This May, Google announced a new partnership with national hospital chain HCA Healthcare to consolidate HCA’s digital health data from electronic medical records and medical devices, and to store it in Google Cloud.

This move is just the latest in a growing trend: in the first half of this year alone, at least 38 partnerships have been announced between health care providers and big tech companies. Health systems are hoping to leverage the know-how of tech titans to unlock the potential of their treasure troves of data.

Health systems have faltered in achieving this on their own, facing technical and practical challenges on the one hand, and political and ethical concerns on the other.

Top Health Considerations in the European Commission’s ‘Harmonised Rules on Artificial Intelligence’

By Rachele Hendricks-Sturrup

On April 21, 2021, the European Commission released a “first-ever” legal framework on artificial intelligence (AI) in an attempt to address societal risks associated with AI implementation.

The EU has now effectively set the global stage for AI regulation, becoming the first bloc of member states to create a legal framework specifically intended to address or mitigate the potentially harmful effects of broad AI implementation.

Within the proposed framework, the Commission touched on a variety of considerations and “high-risk” AI system scenarios. The Commission defined high-risk AI systems as those that pose significant (material or immaterial) risks to the health and safety or fundamental rights of persons.

This post outlines four key considerations in the proposal with regard to health: 1) prioritizing emergency health care; 2) law enforcement profiling as a social determinant of health; 3) immigrant health risk screening; and 4) AI regulatory sandboxes and a health data space to support AI product commercialization and public health innovation.

Regulatory Gap in Health Tech: Resource Allocation Algorithms

By Jenna Becker

Hospitals use artificial intelligence and machine learning (AI/ML) not only in clinical decision-making, but also to allocate scarce resources.

These resource allocation algorithms have received less regulatory attention than clinical decision-making algorithms, but nevertheless pose similar concerns, particularly with respect to their potential for bias.

Without regulatory oversight, the risks associated with resource allocation algorithms are significant. Health systems must take particular care when implementing these solutions.
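To make the bias concern concrete, here is a minimal, hypothetical Python sketch of a pattern researchers have documented in allocation tools: ranking patients by a spending-based proxy rather than by clinical need. The data schema and scoring rule are illustrative assumptions, not a description of any deployed hospital system.

```python
# Hypothetical illustration of a biased allocation rule: using predicted
# cost as a proxy for need deprioritizes patients whose past barriers to
# access suppressed their historical spending.
def allocate_beds(patients, capacity):
    """patients: list of dicts with a 'predicted_cost' field (assumed schema)."""
    ranked = sorted(patients, key=lambda p: p["predicted_cost"], reverse=True)
    return ranked[:capacity]

# Two patients with identical clinical need can receive different priority
# solely because one has a lower historical-spending estimate.
waitlist = [
    {"id": "A", "predicted_cost": 48000},  # high past utilization
    {"id": "B", "predicted_cost": 21000},  # same need, less past access to care
]
print(allocate_beds(waitlist, capacity=1))  # patient A gets the bed
```

Even this toy rule shows why such systems deserve the same scrutiny as clinical decision-making algorithms: the bias lives in the choice of proxy, not in any obvious reference to a protected characteristic.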

The Future of Race-Based Clinical Algorithms

By Jenna Becker

Race-based clinical algorithms are widely used. Yet many race-based adjustments lack evidence and worsen racism in health care. 
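One widely cited example of such an adjustment is the race coefficient in the MDRD equation for estimated glomerular filtration rate (eGFR). The Python sketch below is offered as an illustration, not as the post’s own example:

```python
def mdrd_egfr(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) per the 2006 IDMS-traceable MDRD Study equation."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212  # race coefficient: reports higher kidney function for Black patients
    return egfr
```

Because a higher eGFR indicates better kidney function, the coefficient can delay diagnosis of chronic kidney disease and referral for transplant for Black patients, which is why nephrology groups have since moved toward race-free equations.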

Prominent politicians have called for research into the use of race-based algorithms in clinical care as part of a larger effort to understand the public health impacts of structural racism. Physicians and researchers have called for an urgent reconsideration of the use of race in these algorithms. 

Efforts to remove race-based algorithms from practice have thus far been piecemeal. Medical associations, health systems, and policymakers must work in tandem to rapidly identify and remove racist algorithms from clinical practice.

Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to utilize the rivers of data we generate to better diagnose and treat medical conditions, especially mental health conditions such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once digital phenotyping’s greatest strength and its most significant challenge.

For example, the average smartphone user spends 2.25 hours a day using the 60 to 90 apps they have installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When dealing with this quantity of data from each patient or research subject, how does the care team ensure that it does not miss important predictors of health?
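As a rough illustration of the sorting problem, consider reducing a day’s raw smartphone events to a handful of candidate behavioral markers. This is a minimal sketch under assumed inputs: the event schema and the chosen features (total screen time, late-night use as a crude sleep-disruption proxy) are hypothetical, not any published digital phenotyping pipeline.

```python
from collections import defaultdict
from datetime import datetime

def summarize_day(events):
    """events: list of (timestamp: datetime, app: str, seconds: float) tuples (assumed schema)."""
    total = sum(sec for _, _, sec in events)
    # Usage between midnight and 6 a.m. as a crude sleep-disruption proxy.
    night = sum(sec for ts, _, sec in events if ts.hour < 6)
    per_app = defaultdict(float)
    for _, app, sec in events:
        per_app[app] += sec
    return {
        "screen_time_hours": round(total / 3600, 2),
        "night_use_hours": round(night / 3600, 2),
        "distinct_apps": len(per_app),
    }

day = [
    (datetime(2021, 6, 1, 1, 30), "social", 1800.0),    # 1:30 a.m. session
    (datetime(2021, 6, 1, 13, 0), "messaging", 5400.0),
]
print(summarize_day(day))  # {'screen_time_hours': 2.0, 'night_use_hours': 0.5, 'distinct_apps': 2}
```

Even this toy reduction discards most of the stream; deciding which such features are clinically meaningful is exactly the judgment call the post describes.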

Is Real-World Health Algorithm Review Worth the Hassle?

By Jenna Becker

The U.S. Food and Drug Administration (FDA) should not delay its plans to regulate clinical algorithms, despite the challenges associated with reviewing the real-world performance of these products.

The FDA Software Pre-Certification (Pre-Cert) Pilot Program was designed to provide “streamlined and efficient” regulatory oversight of Software as a Medical Device (SaMD): software products that the FDA can regulate as medical devices. The Pre-Cert program, in its pilot phase, is intended to inform the development of a future SaMD regulatory model.

Last month, the FDA released an update on Pre-Cert, highlighting lessons learned from pilot testing and next steps for developing the program. One key lesson was the difficulty of identifying and obtaining the real-world performance data needed to analyze the clinical effectiveness of SaMD in practice. Although this challenge will be difficult to overcome in the near term, the FDA’s plans to regulate should not be slowed by insufficient postmarket data.

On Social Suicide Prevention, Don’t Let the Perfect be the Enemy of the Good

In a piece in The Guardian and a forthcoming article in the Yale Journal of Law and Technology, Bill of Health contributor Mason Marks recently argued that Facebook’s suicide prediction algorithm is dangerous and ought to be subject to rigorous regulation and transparency requirements. Some of his suggestions (in particular, his calls for more data, and his proposals that concern how we treat potentially suicidal people rather than how we identify them) are powerful and unobjectionable.

But Marks’s core argument, that Facebook’s suicide prediction algorithm is morally problematic unless it is subject to the regulatory regime of medicine and operated on an opt-in basis, is misguided and alarmist.

Of Algorithms, Algometry, and Others: Pain Measurement & The Quantification of Distrust

By Frank Pasquale, Professor of Law, University of Maryland Carey School of Law

Many thanks to Amanda for the opportunity to post as a guest in this symposium. I was thinking more about neuroethics half a decade ago; since then, my scholarly agenda has focused mainly on algorithms, automation, and health IT. But there is an important common thread: the unintended consequences of technology. With that in mind, I want to discuss a context where the measurement of pain (algometry?) might be further algorithmized or systematized, and, if so, who will be helped, who will be harmed, and which individual and social phenomena we may miss as we focus on new and compelling pictures.

Some hope that better pain measurement will make legal disability or damages determinations more scientific. Identifying a brain-based correlate for pain that otherwise lacks a clear, medically determinable cause might help deserving claimants win recognition of their suffering as disabling. But the history of “rationalizing” disability and welfare determinations is not encouraging. Such steps have often been used to exclude individuals from entitlements, on flimsy suspicions of widespread shirking. In other words, a push toward measurement is more often a cover for putting a suspect class through additional hurdles than a way of finding and helping those viewed as deserving.

Of Disability, Malingering, and Interpersonal Comparisons of Disutility
