By Jenna Becker
The U.S. Food and Drug Administration (FDA) should not delay its plans to regulate clinical algorithms, despite the challenges associated with reviewing the real-world performance of these products.
The FDA Software Pre-Certification (Pre-Cert) Pilot Program was designed to provide “streamlined and efficient” regulatory oversight of Software as a Medical Device (SaMD) — software products that are regulable by the FDA as a medical device. The Pre-Cert program, in its pilot phase, is intended to inform the development of a future SaMD regulatory model.
Last month, the FDA released an update on Pre-Cert, highlighting lessons learned from pilot testing and next steps for developing the program. One key lesson was the difficulty of identifying and obtaining the real-world performance data needed to analyze the clinical effectiveness of SaMDs in practice. Although this challenge will be difficult to overcome in the near future, the FDA's plans to regulate should not be slowed by insufficient postmarket data.
Why use real-world performance review?
After approving a product for market via “streamlined review,” the FDA plans to regularly review SaMD usage in the real world to verify safety, effectiveness, and performance.
Real-world performance review is important for a few reasons. Premarket SaMD review doesn't cover updates or algorithm re-training, but software must be updatable: vendors need to be able to address bugs and adjust algorithms based on user feedback. Further, some algorithms may be retrained manually or may update automatically, learning as they are used in practice. Premarket review is not well-equipped to handle these sorts of updates.
Real-world performance data also allows the FDA to analyze whether software works well in practice, and vendors can use this data to regularly update algorithms, improving SaMD efficacy. Although an algorithm may have been validated in a peer-reviewed study, it may not remain accurate when widely deployed. For example, a recent study found that the majority of deep learning clinical algorithms are trained on data from only three states. These types of oversights in model development can undermine algorithmic accuracy in broader populations.
Roadblocks to real-world review
There are a couple of challenges to implementing real-world performance review for clinical outcomes.
First, real-world performance data for health outcomes is not easily accessible. The FDA noted in its Pre-Cert update that it is continuing to explore automated methods to obtain clinical data from external sources.
This is a technologically challenging problem to solve: clinical outcome data is often found in electronic medical records, which are not known for their interoperability. The FDA is currently developing the National Evaluation System for health Technology (NEST), an evaluation system that will aggregate and analyze data from sources like electronic health records and medical billing claims. Unfortunately, no timetable for broad NEST implementation has been released.
Second, the FDA faces challenges in identifying the metrics needed to validate SaMD benefits. The FDA's Pre-Cert update conceded that some metrics, like the downstream health benefits of SaMD usage, are difficult to observe and quantify. Without clear metrics, postmarket clinical review will not provide value.
A path forward
A key benefit of real-world clinical performance review is addressing changes to an algorithm. The FDA is already planning to issue procedures for vendors who intend to update their SaMD. Such change control mechanisms may require updated testing when an algorithm update is deployed.
Tracking the efficacy of “adaptive” algorithms is significantly more complex, as they will continue to change as they are used. Adaptive clinical algorithms should not be permitted until an effective postmarket review process is solidified.
Software update procedures, however, do not address the concern that an algorithm may simply not work in practice. Aspects of the Pre-Cert plan do address premarket testing, and the FDA could strengthen premarket requirements to ensure broader testing of SaMD before deployment. Unfortunately, this could cut against the "streamlined and efficient" goal of Pre-Cert.
Real-world performance data is important, especially for handling SaMD updates. Obtaining this data for postmarket review should certainly remain a goal for the FDA. But the challenges of completing postmarket review should not slow the FDA's plans to regulate clinical algorithms. With an influx of new clinical algorithms, the FDA should prioritize the aspects of its regulatory model that it can accomplish sooner rather than later.