Machine learning in medicine is accelerating at an incredible rate, bringing a new era of ethical and regulatory challenges to the clinic.
In a new paper published in PLOS Medicine, Effy Vayena, Alessandro Blasimme, and I. Glenn Cohen spell out these ethical challenges and offer suggestions for how Institutional Review Boards (IRBs), medical practitioners, and developers can ethically deploy machine learning in medicine (MLm).
In “Machine learning in medicine: Ethical challenges and how to move forward,” the authors illustrate the many concerns raised by MLm algorithms’ entry into clinical practice, including data sourcing and data protection, transparency and accountability, and the development and deployment of predictive algorithms, which could exacerbate biases and undermine patient autonomy, among other ethical difficulties.
Guidance from regulators, informed by input from medical practitioners and developers, is key to addressing these issues. The paper identifies a number of ethical and legal questions on which the authors suggest regulatory guidance could provide leadership, including:
- The disclosure of basic, yet meaningful, details about MLm-based patient assessments and treatment suggestions.
- The grounds for liability for adverse events related to the use of MLm.
- Consent for the use of patient data in accordance with the regulations of the jurisdiction where the data originated.
- Best practices for minimizing the introduction of bias and mitigating downstream effects.
“Standards — both ethical and technical — are key to fulfilling these aims,” caution the authors. “Regulators must develop standard procedures, including effective post-marketing monitoring mechanisms.”