By Jenna Becker
The early days of the COVID-19 pandemic were met by the rapid rollout of artificial intelligence tools to diagnose the disease and identify patients at risk of worsening illness in health care settings.
Understandably, these tools were generally released without regulatory oversight, and some models were deployed prior to peer review. However, even after several months of ongoing use, many AI developers still have not shared their testing results for external review.
This precedent set by the pandemic may have a lasting — and potentially harmful — impact on the oversight of health care AI.