
Building Trust Through Transparency? FDA Regulation of AI/ML-Based Software

By Jenna Becker

To generate trust in artificial intelligence and machine learning (AI/ML)-based software used in health care, the U.S. Food and Drug Administration (FDA) intends to regulate this technology with an eye toward user transparency. 

But will transparency in health care AI actually build trust among users? Or will algorithm explanations go ignored? I argue that individual algorithm explanations will likely do little to build trust among health care AI users.

AI transparency

AI/ML-based software is often opaque. When an algorithm makes a decision or provides a recommendation, the reasoning behind that action is frequently unclear. There are several reasons for this opacity. The software may learn and change over time, quickly rendering algorithm explanations obsolete. The algorithm may be too complex to explain in understandable terms. AI transparency may create privacy or security risks, disincentivizing vendors from providing explanations. Or a software vendor may simply be trying to protect its intellectual property.

These factors pose unique regulatory challenges to the FDA, which maintains regulatory authority over Software as a Medical Device (SaMD). 

Although the FDA can regulate AI/ML-based SaMD, the agency has not yet finalized its regulatory framework. In January, the FDA released an updated action plan for AI/ML-based devices. One of its stated goals is to “promote a patient-centered approach to AI/ML-based technologies based on transparency to users.”

Many consider AI explainability crucial to avoiding algorithmic bias, a concern that is not unique to health care. Understanding the basis of an algorithm and the data used to train it can help identify discrimination and bias in algorithmic decision-making.

To promote transparency, the FDA intends to promulgate labeling requirements for AI/ML-based devices. This year, the FDA plans to gather feedback on the types of information that should be included in such a label. This could include the algorithm’s training data, inputs, logic, intended use, and performance testing results. 

Such transparency would certainly aid an agency like the FDA in regulating AI/ML-based devices. The FDA cannot validate the equitable representation of patient populations in training data without some level of access to that data. The FDA cannot detect inherently biased algorithm factors without access to those factors. And the FDA cannot ensure that AI-based recommendations are free of bias without access to algorithm testing results.

The FDA’s ability to effectively validate health care AI hinges on some level of transparency. However, the FDA’s focus on “transparency to users” may be misplaced. 

Will algorithmic transparency build trust in health AI?

Yes, transparency in the hands of a regulator is valuable. But does this information provide value to a physician? When an AI-based recommendation pops up on a provider’s screen, they will likely lack the time and the ability to sort through a detailed label.

Perhaps this transparency would help establish initial trust in algorithm output among providers. Before using a product for the first time, a provider may choose to review its label and determine whether they trust its output. However, individual providers may not be well-equipped to evaluate these labels without additional training.

In the hands of patients, this information will likely prove even less useful. Patient-facing health care AI is becoming increasingly common. Without training in either medicine or technology, a patient has little use for detailed information about training data or algorithm factors. The millions of people downloading AI-based digital health apps are unlikely to review device labels in any detail. 

Of course, it is possible that AI transparency to users might help establish trust in health AI. Providers, patients, and members of the public may choose to validate health AI on their own. Such review is certainly warranted: health care AI is biased, and the FDA’s rollout of AI oversight has been slow.

However, it seems likely that effective regulation will do much more to improve trust in these devices than algorithm explanations to users. The FDA is better equipped to evaluate the efficacy of AI/ML-based devices than providers and patients. 

Rather than focusing on guidelines for user-facing transparency, the FDA should focus on finalizing its regulation of AI/ML-based devices. Perhaps then trust in health care AI will start growing.


Jenna Becker is a 2L at Harvard Law School with a background in health care software.
