Artificial Intelligence, Medical Malpractice, and the End of Defensive Medicine

By Shailin Thomas

Artificial intelligence and machine-learning algorithms are the centerpieces of many exciting technologies currently in development. From self-driving Teslas to in-home assistants such as Amazon’s Alexa or Google Home, AI is swiftly becoming the hot new focus of the tech industry. Even those outside Silicon Valley have taken notice — Harvard’s Berkman Klein Center and the MIT Media Lab are collaborating on a $27 million fund to ensure that AI develops in an ethical, socially responsible way. One area in which machine learning and artificial intelligence are poised to make a substantial impact is health care diagnosis and decision-making. As Nicholson Price notes in his piece “Black Box Medicine,” medicine “already does and increasingly will use the combination of large-scale high-quality datasets with sophisticated predictive algorithms to identify and use implicit, complex connections between multiple patient characteristics.” These connections will allow doctors to increase the precision and accuracy of their diagnoses and decisions, identifying and treating illnesses better than ever before.

As the technology improves, the introduction of AI into medical diagnosis and decision-making has the potential to greatly reduce the number of medical errors and misdiagnoses — and to allow diagnosis based on physiological relationships we don’t even know exist. As Price notes, “a large, rich dataset and machine learning techniques enable many predictions based on complex connections between patient characteristics and expected treatment results without explicitly identifying or understanding those connections.” However, by shifting pieces of the decision-making process to an algorithm, increased reliance on artificial intelligence and machine learning could complicate potential malpractice claims when doctors pursue improper treatment as the result of an algorithm error. In its simplest form, the medical malpractice regime in the United States is a professional tort system that holds physicians liable when the care they provide to patients deviates so far from accepted standards as to constitute negligence or recklessness. The system has evolved around the conception of the physician as the trusted expert, and it presumes, for the most part, that the diagnosing or treating physician is entirely responsible for her decisions — and thus liable if the care provided is negligent or reckless.
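To make the mechanism Price describes concrete, here is a minimal sketch of this kind of black-box prediction, written in Python with the scikit-learn library. The patient features, outcome labels, and choice of model are synthetic assumptions made purely for illustration; they are not drawn from any real clinical dataset or from Price’s work.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical data: 1,000 patients, 20 characteristics each
    # (age, lab values, vitals, etc.), with a binary treatment outcome.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y = (X @ rng.normal(size=20) + rng.normal(size=1000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The model learns a mapping from patient characteristics to outcomes
    # without anyone explicitly identifying or understanding the underlying
    # physiological connections -- hence "black box."
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

The point of the sketch is that the fitted model can predict outcomes for new patients even though its internal weights correspond to no articulated medical theory.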

But who should be responsible when a doctor provides erroneous care at the suggestion of an AI diagnostic tool? If the algorithm has a higher accuracy rate than the average doctor — as many soon will — it seems wrong to continue to place the blame on the physician. Deferring to the algorithm’s suggestion would, ex ante, always be the statistically best option, so it is hard to argue that a physician is negligent in following the algorithm, even when it turns out to be wrong and the patient is harmed. As algorithms improve and doctors rely on them more heavily for diagnosis and decision-making, the traditional malpractice notions of physician negligence and recklessness may become harder to apply.
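A bit of back-of-the-envelope arithmetic shows why. The accuracy figures below are assumptions chosen purely for illustration, not measured clinical numbers:

    # Illustrative only: assumed accuracy rates, not real clinical figures.
    algorithm_accuracy = 0.95   # hypothetical diagnostic algorithm
    physician_accuracy = 0.85   # hypothetical unaided physician
    patients = 1000

    # Expected misdiagnoses across 1,000 patients under each policy.
    print(f"Always follow the algorithm: {patients * (1 - algorithm_accuracy):.0f} errors")
    print(f"Physician alone:             {patients * (1 - physician_accuracy):.0f} errors")
    # Ex ante, deferring to the more accurate tool minimizes expected errors,
    # even though the algorithm will still be wrong for some patients.

On these assumed numbers, always deferring to the algorithm yields 50 expected errors rather than 150, which is why a physician who follows the more accurate tool is hard to characterize as negligent in any individual case.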

Is this something we should worry about? Medical malpractice laws exist to protect patients, and as algorithms take on a larger role in the medical decision-making process, malpractice suits will become a less viable means of policing diagnosis and treatment decisions. There is reason to believe, however, that this could be a good thing.

Malpractice liability for medical decision-making creates incentives that may not benefit individual patients and that are bad for the health care system as a whole. First, it is unclear that medical malpractice laws actually have a protective effect for patients. A recent study by researchers at Northwestern suggests that strict malpractice liability laws do not necessarily correlate with better outcomes. The researchers found that post-operative patients in states where doctors faced a heightened risk of malpractice claims or litigation were 22 percent more likely to become septic, 9 percent more likely to develop pneumonia, 15 percent more likely to suffer acute kidney failure, and 18 percent more likely to have gastrointestinal bleeding. Increased malpractice liability was correlated with significantly worse outcomes, suggesting that strict medical malpractice laws and heightened liability do not necessarily influence treatment in ways that keep patients safer.

Second, and more importantly, using AI and machine-learning algorithms to shift malpractice liability away from physicians may help the health care system as a whole by mitigating the as-yet-intractable problem of overspending on care. One of the negative side effects of increased malpractice liability for physicians is the practice of defensive medicine: to avoid potential lawsuits, risk-averse physicians order far more diagnostic tests and treatments than a patient’s condition warrants. The practice is incredibly widespread. In a 2010 Gallup poll, 73 percent of private-sector physicians admitted to practicing defensive medicine. While such tactics can reduce the chance of a successful negligence claim against the physician, they add to the estimated $210 billion the U.S. spends annually on unnecessary care. To put that figure in perspective, it is more than the entire United Kingdom spends on health care each year. As the U.S. looks to bend the cost curve and make the health care system more sustainable, reducing the money wasted on unnecessary care will be an important target. Shifting liability away from doctors through reliance on diagnostic and decision-making algorithms won’t eradicate defensive medicine entirely, but it would blunt one of its main drivers in certain clinical settings.

Artificially intelligent health care tools are no longer the stuff of science fiction. Just last week, the Internet health company HealthTap announced Dr. AI, an AI-powered, consumer-facing diagnostic app that patients can download onto their phones. Hopefully, as these algorithms proliferate and improve, doctors will come to rely on their superior accuracy and precision, and the accompanying decrease in malpractice liability will allow them to forgo ordering every conceivable test or treatment. Those pioneering health care artificial intelligence likely did not set out to solve the problem of defensive medicine, but they may well have stumbled upon a solution.
