By Jenna Becker
The early days of the COVID-19 pandemic were met by the rapid rollout of artificial intelligence tools to diagnose the disease and identify patients at risk of worsening illness in health care settings.
Understandably, these tools were generally released without regulatory oversight, and some models were deployed prior to peer review. Even after months of ongoing use, however, some AI developers still have not shared their testing results for external review.
This precedent set by the pandemic may have a lasting — and potentially harmful — impact on the oversight of health care AI.
COVID-19 AI Usage and Peer Review
The use of AI to assist with triaging patients during the pandemic is understandable. Especially early in the pandemic, health care organizations wanted to use any available insight in the face of a poorly understood illness. Predictive algorithms may have been able to find patterns much more quickly than clinicians or researchers could. These tools could also help health care organizations manage ballooning admission rates by quickly triaging patients who need immediate attention or elevated care.
However, early adoption of COVID-19 predictive algorithms may have reduced incentives for AI developers to publicly share their testing results. According to a recent STAT article, some major vendors have not released performance data despite months of widespread use.
Some AI developers have published testing results. But an April study of published COVID-19 models indicates a high risk of bias and overestimated accuracy. Thus, peer review alone may be insufficient to guarantee algorithmic accuracy.
Health Care AI Regulatory Oversight
Regulatory oversight is, similarly, not a straightforward solution. Requiring full FDA approval of predictive models would substantially slow their release — a significant hindrance during a pandemic. However, the FDA may issue Emergency Use Authorizations, speeding up this process during public health emergencies.
Although a few COVID-19 predictive algorithms have received emergency approval, the FDA does not appear to require authorizations for such products. The FDA’s health care AI regulatory plans are still in their early stages. The agency plans to focus its regulatory efforts on high-risk AI, like software used to treat or directly diagnose disease.
Further, the FDA has no plans to regulate transparent clinical decision support algorithms, which allow providers to independently review the basis of a prediction. This "transparency" can be illusory. Clinicians may not have the statistical or epidemiological expertise required to fully understand the basis of certain models. The problem is even more apparent in the case of a novel illness like COVID-19, where the factors driving these models may not be understood even by experts. As a result, providers may come to rely blindly on an algorithm to treat a poorly understood illness.
Impact of Rapid AI Adoption
The COVID-19 pandemic has clearly demonstrated the benefits of artificial intelligence in health care settings. Predictive models can help alleviate strain on limited hospital resources. They may also surface insights into patient conditions that a clinician would not recognize.
To address bias and accuracy concerns, AI developers should share algorithm performance data, and the health care organizations deploying these tools should regularly validate their predictive accuracy. However, with minimal regulatory requirements and weak incentives for thorough peer review, there is no clear mechanism to enforce AI oversight.
A recent Intel survey indicates a growing acceptance of and trust in AI by health care leaders. With the precedents set by the rapid adoption of COVID-19 predictive algorithms, this trust may not be well-earned.