By Jenna Becker
To generate trust in artificial intelligence and machine learning (AI/ML)-based software used in health care, the U.S. Food and Drug Administration (FDA) intends to regulate this technology with an eye toward user transparency.
But will transparency in health care AI actually build trust among its users? Or will algorithm explanations simply be ignored? I argue that individual algorithm explanations are likely to do little to build trust among health care AI users.