Machine Learning in Medicine: Addressing Ethical Challenges

Machine learning in medicine is accelerating at an incredible rate, bringing a new era of ethical and regulatory challenges to the clinic.

In a new paper published in PLOS Medicine, Effy Vayena, Alessandro Blasimme, and I. Glenn Cohen spell out these ethical challenges and offer suggestions for how Institutional Review Boards (IRBs), medical practitioners, and developers can ethically deploy machine learning in medicine (MLm).

Read More


Artificial Intelligence for Suicide Prediction

Suicide is a global problem, causing 800,000 deaths worldwide each year. In the United States, suicide rates rose by 25 percent in the past two decades, and suicide now kills 45,000 Americans each year, more than auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI) based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Social suicide prediction uses behavioral data mined from consumers’ digital interactions, and it is essentially unregulated: the companies involved, which include large internet platforms such as Facebook and Twitter, are generally not subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.

Read More


Four Roles for Artificial Intelligence in the Medical System

How will artificial intelligence (AI) change medicine?

AI, powered by “big data” in health, promises to transform medical practice, but specifics remain inchoate. Reports that AI performs certain tasks at the level of specialists stoke worries that AI will “replace” physicians. These worries are probably overblown; AI is unlikely to replace many physicians in the foreseeable future. A more productive set of questions considers how AI and physicians should interact, including how AI can improve the care physicians deliver, how AI can best enable physicians to focus on the patient relationship, and how physicians should review the recommendations and predictions of AI. Answering those questions requires clarity about the larger function of AI: not just what tasks AI can do or how it can do them, but what role it will play in the context of physicians, patients, and other providers within the overall medical system.

Medical AI can improve care for patients and improve the practice of medicine for providers—as long as its development is supported by an understanding of what role it can and should play.

Four distinct roles could each be transformative for providers and patients: AI can push the frontiers of medicine; it can replicate and democratize medical expertise; it can automate medical drudgery; and it can allocate medical resources.

Read More