By Vrushab Gowda
The U.S. Food and Drug Administration (FDA or “the Agency”) recently issued its long-awaited Artificial Intelligence/Machine Learning (AI/ML) Action Plan.
Announced amid the closing days of Stephen Hahn’s term as Commissioner, it takes steps toward establishing a dedicated regulatory strategy for AI products intended as software as a medical device (SaMD), as distinct from those embedded within physical hardware. The FDA has already approved a number of such products for clinical use; however, AI algorithms’ self-learning capabilities expose the limitations of traditional regulatory pathways.
The Action Plan further outlines the first major objectives of the Digital Health Center of Excellence (DHCoE), which was established to much fanfare but whose early moves have remained somewhat unclear. This document presents a policy roadmap for its years ahead.
Lessons from the Past
The Action Plan builds upon a framework originally described in an April 2019 Discussion Paper. Namely, it incorporates feedback on the paper’s proposed “Predetermined Change Plan” for continuously learning algorithms.
In contrast to the conventional routes of FDA premarket review, the “Change Plan” offers a total product lifecycle (TPLC) approach to AI regulation. It would consist of two components: SaMD Pre-Specifications (SPS) and the Algorithm Change Protocol (ACP). Under the former, developers specify the modifications they anticipate making to their product, disclosing these to the FDA in advance of marketing. The latter “protocol” refers to a sophisticated risk management strategy governing said modifications. The FDA would use this information to stratify products based on the degree of patient risk exposure, developer oversight, and changes to intended use.
Dissecting the Action Plan
The Action Plan synthesized stakeholder input on the Discussion Paper (obtained from a number of public meetings across 2020) to develop a five-pronged strategy, each prong of which will be discussed in turn.
It first aims to (1) provide a “tailored regulatory framework for AI/ML-based SaMD.” The FDA intends to accomplish this by way of a draft guidance document to be issued later in 2021, which would specify the content of a complete SPS/ACP submission, provide examples of appropriate modifications, and offer additional clarity on the review process itself.
It also seeks to (2) “encourage harmonization of Good Machine Learning Practice (GMLP) development” in the vein of its Current Good Manufacturing Practices (CGMP) for pharmaceutical products. To this end, the Agency is cultivating relationships with a host of third-party certification fora, such as the International Medical Device Regulators Forum (IMDRF), the International Organization for Standardization (ISO), and the Institute of Electrical and Electronics Engineers (IEEE).
In (3) adopting a “patient-centered approach incorporating transparency to users,” the FDA plans to mitigate some ramifications of AI devices’ inherent opacity. It will host a workshop soliciting public input on labeling, expanding the findings of an October 2020 Patient Engagement Advisory Committee meeting on AI/ML.
The Agency will additionally (4) foster “regulatory science methods related to algorithm bias & robustness,” largely through academic partnerships to promote ethnic, racial, and socioeconomic diversity in both training datasets and target populations.
Lastly, the FDA intends to (5) support voluntary pilot programs to incorporate real-world performance monitoring. The Action Plan’s description suggests a more muscular AI/ML variant of the Pre-Cert program, one that would include close post-market surveillance.
Avoiding Future Pitfalls
The Action Plan represents a major shift in the FDA’s treatment of AI products.
For a blueprint that effectively outlines the regulatory future of an entire product category, however, it remains an eight-page document relatively sparse on detail. Its timeline is indeterminate: the announced guidance on Predetermined Change Control is the only feature associated with a discrete timeframe, and only “2021” at that. When stakeholders can expect to provide input on AI transparency concerns, receive information on GMLP specifications, and enroll in the proposed pilot program remains to be determined.
This is no mere technical point; it directly bears upon stakeholders’ research, development, and commercialization strategies. The pilot program may itself prove controversial. The FDA can expect to face substantial industry resistance to real-world performance monitoring, and has acknowledged that statutory authority is a likely prerequisite for any expansion beyond the demonstration phase.
Further, the Action Plan focuses on software as a medical device, but does not indicate a regulatory pathway for software in medical devices, such as the AI algorithms embedded within insulin pumps and wearables currently under development.
While offering more granularity on the TPLC approach established in the preceding Discussion Paper, the Action Plan leaves many key questions unanswered. The FDA should address these in subsequent guidance documents, public workshops, and press releases, each accompanied by more concrete timelines. Nevertheless, the Action Plan represents a significant step in the right direction and promises to finally bring a semblance of clarity to a field of regulation that has long been murky.