
Is Real-World Health Algorithm Review Worth the Hassle?

By Jenna Becker

The U.S. Food and Drug Administration (FDA) should not delay its plans to regulate clinical algorithms, despite challenges associated with reviewing the real-world performance of these products.

The FDA Software Pre-Certification (Pre-Cert) Pilot Program was designed to provide “streamlined and efficient” regulatory oversight of Software as a Medical Device (SaMD) — software products that the FDA can regulate as medical devices. The Pre-Cert program, in its pilot phase, is intended to inform the development of a future SaMD regulatory model.

Last month, the FDA released an update on Pre-Cert, highlighting lessons learned from pilot testing and next steps for developing the program. One key lesson learned was the difficulty in identifying and obtaining the real-world performance data needed to analyze the clinical effectiveness of SaMDs in practice. Although this challenge will be difficult to overcome in the near future, the FDA’s plans to regulate should not be slowed by insufficient postmarket data.



On Social Suicide Prevention, Don’t Let the Perfect be the Enemy of the Good

In a piece in The Guardian and a forthcoming article in the Yale Journal of Law and Technology, Bill of Health contributor Mason Marks recently argued that Facebook’s suicide prediction algorithm is dangerous and ought to be subject to rigorous regulation and transparency requirements. Some of his suggestions (in particular, his calls for more data, and those that concern how we treat potentially suicidal people rather than how we identify them) are powerful and unobjectionable.

But Marks’s core argument, that Facebook’s suicide prediction algorithm is morally problematic unless it is subject to the regulatory regime of medicine and operated on an opt-in basis, is misguided and alarmist.

Of Algorithms, Algometry, and Others: Pain Measurement & The Quantification of Distrust

By Frank Pasquale, Professor of Law, University of Maryland Carey School of Law

Many thanks to Amanda for the opportunity to post as a guest in this symposium. I was thinking more about neuroethics half a decade ago, and my scholarly agenda has since focused mainly on algorithms, automation, and health IT. But there is an important common thread: the unintended consequences of technology. With that in mind, I want to discuss a context in which the measurement of pain (algometry?) might be further algorithmized or systematized, and to ask, if it is, who will be helped, who will be harmed, and what individual and social phenomena we may miss as we focus on new and compelling pictures.

Some hope that better pain measurement will make legal disability or damages determinations more scientific. Identifying a brain-based correlate for pain that otherwise lacks a clearly medically determinable cause might help deserving claimants win recognition of their suffering as disabling. But the history of “rationalizing” disability and welfare determinations is not encouraging. Such steps have often been used to exclude individuals from entitlements, on flimsy assumptions of widespread shirking. In other words, a push toward measurement is more often a cover for putting a suspect class through additional hurdles than a means of finding and helping those viewed as deserving.

Of Disability, Malingering, and Interpersonal Comparisons of Disutility
