A Closer Look at FDA’s Newly Released AI/ML Action Plan

By Vrushab Gowda

The U.S. Food and Drug Administration (FDA or “the Agency”) recently issued its long-awaited AI/ML (Artificial Intelligence/Machine Learning) Action Plan.

Announced in the closing days of Stephen Hahn’s term as Commissioner, the Action Plan takes steps toward establishing a dedicated regulatory strategy for AI products marketed as software as a medical device (SaMD), as distinct from those embedded within physical hardware. The FDA has already approved a number of such products for clinical use; however, AI algorithms’ self-learning capabilities expose the limitations of traditional regulatory pathways.

The Action Plan further outlines the first major objectives of the Digital Health Center of Excellence (DHCoE), which was established to much fanfare but whose early moves have remained somewhat unclear. This document presents a policy roadmap for its years ahead.

Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to utilize the rivers of data we generate to better diagnose and treat medical conditions, especially mental health conditions such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once both digital phenotyping’s greatest strength and a significant challenge.

For example, the average smartphone user spends 2.25 hours a day using the 60-90 apps installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When dealing with this quantity of data generated by each patient or research subject, how does the care team ensure that they do not miss important predictors of health?

Health Care AI in Pandemic Times

By Jenna Becker

The early days of the COVID-19 pandemic were met with the rapid rollout of artificial intelligence tools to diagnose the disease and identify patients at risk of worsening illness in health care settings.

Understandably, these tools generally were released without regulatory oversight, and some models were deployed prior to peer review. However, even months into their ongoing use, several AI developers still have not shared their testing results for external review.

This precedent set by the pandemic may have a lasting — and potentially harmful — impact on the oversight of health care AI.

AI’s Legitimate Interest: Video Preview with Charlotte Tschider

The Health Law Policy, Bioethics, and Biotechnology Workshop provides a forum for discussion of new scholarship in these fields from the world’s leading experts.

The workshop is led by Professor I. Glenn Cohen, and presenters come from a wide range of disciplines and departments.

In this video, Charlotte Tschider gives a preview of her paper, “AI’s Legitimate Interest: Towards a Public Benefit Privacy Model,” which she will present at the Health Law Policy workshop on November 9, 2020. Watch the full video below:

Is Real-World Health Algorithm Review Worth the Hassle?

By Jenna Becker

The U.S. Food and Drug Administration (FDA) should not delay its plans to regulate clinical algorithms, despite challenges associated with reviewing the real-world performance of these products.

The FDA Software Pre-Certification (Pre-Cert) Pilot Program was designed to provide “streamlined and efficient” regulatory oversight of Software as a Medical Device (SaMD) — software products that are regulable by the FDA as medical devices. The Pre-Cert program, in its pilot phase, is intended to inform the development of a future SaMD regulatory model.

Last month, the FDA released an update on Pre-Cert, highlighting lessons learned from pilot testing and next steps for developing the program. One key lesson learned was the difficulty in identifying and obtaining the real-world performance data needed to analyze the clinical effectiveness of SaMDs in practice. Although this challenge will be difficult to overcome in the near future, the FDA’s plans to regulate should not be slowed by insufficient postmarket data.

Understanding Racial Bias in Medical AI Training Data

By Adriana Krasniansky

Interest in artificial intelligence (AI) in health care has grown at an astounding pace: the global AI health care market is expected to reach $17.8 billion by 2025, and AI-powered systems are being designed to support medical activities ranging from patient diagnosis and triaging to drug pricing.

Yet, as researchers across technology and medical fields agree, “AI systems are only as good as the data we put into them.” When AI systems are trained on patient datasets that are incomplete, or that underrepresent or misrepresent certain populations, they stand to develop discriminatory biases in their outcomes. In this article, we present three examples that demonstrate the potential for racial bias in medical AI arising from training data.

Please and Thank You: Do We Have Moral Obligations Towards Emotionally Intelligent Machines?

By Sonia Sethi

Do you say “thank you” to Alexa (or your preferred AI assistant)?

A quick poll on my social media revealed that, of 76 participants, 51 percent thank their artificial intelligence (AI) assistant some or every time. When asked why they do or do not express thanks, people gave a myriad of interesting, and entertaining, responses. Common themes emerged: saying thanks because it’s polite or a habit, not saying thanks because “it’s just a database and not a human,” and the ever-present paranoia of a robot apocalypse.

But do you owe Alexa your politeness? Do you owe it any moral consideration whatsoever?

Artificial Intelligence for Suicide Prediction

Suicide is a global problem, claiming 800,000 lives per year worldwide. In the United States, suicide rates have risen by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than either auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Though essentially unregulated, social suicide prediction uses behavioral data mined from consumers’ digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.

What are Our Duties and Moral Responsibilities Toward Humans when Constructing AI?

Much of what we fear about artificial intelligence comes down to our underlying values and perceptions of life itself, as well as the place of the human within that life. The New Yorker cover last week was a telling example of the kind of dystopian society we claim we wish to avoid.

I say “claim” not accidentally, for in some respects the nascent stages of such a society already exist, and perhaps have existed for longer than we realize or care to admit. Regimes of power, what Michel Foucault called biopolitics, are embedded in our social institutions and in the mechanisms, technologies, and strategies by which human life is managed in the modern world. This arrangement could be positive, neutral, or nefarious: it all depends on whether these institutions are used to subjugate (e.g., racism) or liberate (e.g., rights) the human being, and whether they infringe upon the sovereignty of the individual or uphold the sovereignty of the state and the rule of law. In short, biopower is the impact of political power on all domains of human life. This is all the more pronounced today, as technological advances have enabled biopower to stretch beyond the political to almost all facets of daily life in the modern world.