Recently, I wrote about the rise of artificial intelligence in medical decision-making and its potential impacts on medical malpractice. I posited that, by decreasing the degree of discretion physicians exercise in diagnosis and treatment, medical algorithms could reduce the viability of negligence claims against health care providers.
It’s easy to see why artificial intelligence could impact the ways in which medical malpractice traditionally applies to physician decision-making, but it’s unclear who should be responsible when a patient is hurt by a medical decision made with an algorithm. Should the companies that create these algorithms be liable? They did, after all, produce the product that led to the patient’s injury. While intuitively appealing, traditional means of holding companies liable for their products may not fit the medical algorithm context very well.
Traditional products liability doctrine applies strict liability to most consumer products. If a can of soda explodes and injures someone, the company that produced it is liable, even if it didn't do anything wrong in the manufacturing or distribution processes. Strict liability works well for most consumer products, but it would likely prove too burdensome for medical algorithms. This is because medical algorithms are inherently imperfect. No matter how good the algorithm is — or how much better it is than a human physician — it will occasionally be wrong. Under a strict liability regime, even the best algorithms would expose their makers to potentially substantial liability some percentage of the time.
To take a concrete example, Stanford researchers have developed an algorithm that can diagnose melanomas as well as or better than expert dermatologists. As this algorithm improves, it could become an important part of identifying melanomas as early and often as possible. However, the algorithm's accuracy rate is below 75%. Thus, despite being a potential improvement over human physicians, the algorithm could expose its creators to liability for misdiagnosing a patient in roughly one out of every four uses. Companies would simply stop developing these algorithms if they faced such substantial, near-certain liability. Patients would ultimately lose out, because we would never fully realize the immense potential of machine-learning algorithms and artificial intelligence to improve our diagnostic and treatment capabilities.
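The scale of that exposure is easy to see with a back-of-the-envelope calculation. Everything here is a hypothetical illustration — the deployment volume, claim rate, and average payout are invented for the sketch, not drawn from any real case:

```python
# Back-of-the-envelope strict-liability exposure for a diagnostic
# algorithm. All figures are hypothetical illustrations.

error_rate = 0.25          # ~1 in 4 diagnoses wrong (roughly the melanoma example)
uses_per_year = 100_000    # hypothetical deployment volume
claim_rate = 0.05          # hypothetical fraction of errors that become claims
avg_payout = 250_000       # hypothetical average damages per successful claim

expected_errors = uses_per_year * error_rate
expected_claims = expected_errors * claim_rate
expected_liability = expected_claims * avg_payout

print(f"Expected errors per year:   {expected_errors:,.0f}")
print(f"Expected claims per year:   {expected_claims:,.0f}")
print(f"Expected liability per year: ${expected_liability:,.0f}")
```

Even with a modest claim rate, the numbers compound quickly — which is the point: under strict liability, a known error rate converts directly into a known, recurring liability bill.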
There exists an alternative regime for so-called “unavoidably unsafe” products, see Restatement, Second, of Torts § 402A, comment k (1965), such as pharmaceuticals, which attempts to circumvent the complications of strict liability for products with error rates or side effects. This avenue is preferable to strict liability in the context of pharmaceuticals because it’s impossible to make drugs that will never lead to adverse consequences. There will always be an allergic reaction or negative side effect somewhere, just as in the algorithmic context there will always be errors a certain percentage of the time. Because of the inherent danger of pharmaceuticals, most drug product liability regimes rely on claims for failure to test or failure to warn — arguing that either the pharmaceutical company didn’t perform adequate testing to establish the safety and efficacy of the drug or that it neglected to include adequate warnings regarding possible negative effects on the drug label.
While the comparison to pharmaceuticals seems apt, tort claims for failure to warn or failure to test may not translate well to the artificial intelligence algorithm context. These algorithms will likely be trained on hundreds of thousands — if not millions — of pieces of data, and they would constantly be tested and refined to enhance their accuracy even after deployment in hospitals. As a result, failure to test claims likely wouldn’t get much traction in court. Failure to warn claims may not fare much better. Because of the extensive testing algorithms would go through before deployment, their exact complications and error rates would be known with an unprecedented degree of precision — so the probability that a potential error would be unknown or undisclosed is quite small. Accordingly, the avenues for patients to seek compensation for “unavoidably unsafe” products, much like those under malpractice or traditional products liability doctrine, fail to fit well with artificially intelligent medical algorithms.
So what do we do for patients who are harmed by medical algorithms? Is there another remedy with which we can provide them? One possible solution might be found in the compensation regime Congress carved out for another unique medical product that failed to comport with traditional tort remedies available to patients: vaccines.
Vaccines share many of the characteristics that make AI algorithms unfit for traditional buckets of liability designed to make patients whole after suffering adverse consequences — they are important for disease prevention, but they are also inherently imperfect and have many well-documented side effects. Furthermore, the profit margin on their production is so small that putting liability for harms on the manufacturers would cause many companies to stop producing them altogether, which would be hugely problematic from a public health perspective.
As a result, Congress created the National Vaccine Injury Compensation Program under the National Childhood Vaccine Injury Act of 1986. See 42 U.S.C. § 300aa-1, et seq. (2012). The program is a fund powered by a small tax on every vaccine administered, and it is used to compensate patients harmed by vaccines without the need to find liability or assign guilt. Patients who have been harmed simply report their injury to the government for review, and based on the harm and the responsible vaccine, the government provides qualifying patients with compensation. Companies and physicians don’t have to worry about liability stemming from the unavoidable — albeit unlikely — risk posed by vaccines, and patients can still be compensated for injuries. Applying this system to compensating patients injured by artificial intelligence or machine-learning algorithms could help sidestep the thorny liability questions, providing a remedy for patients without trying to force algorithmic medicine into one of the existing tort boxes. Every time an algorithm is used in the course of treatment, a small percentage of the cost of that medical encounter could go into a fund, which could be used to make injured patients whole without finding the doctors or the algorithm companies liable for the injury.
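The mechanics of such a no-fault fund are simple enough to sketch. The following is a minimal illustration, not a policy proposal — the levy rate, encounter costs, and award amounts are all hypothetical placeholders:

```python
# Minimal sketch of a no-fault compensation fund modeled loosely on the
# vaccine injury program: a small per-use levy accrues into a pool, and
# qualifying patients are paid from the pool with no finding of fault.
# The class, the 1% levy, and all dollar figures are hypothetical.

class CompensationFund:
    def __init__(self, levy_rate: float):
        self.levy_rate = levy_rate  # fraction of each encounter's cost
        self.balance = 0.0

    def record_use(self, encounter_cost: float) -> None:
        """Collect the levy each time an algorithm is used in treatment."""
        self.balance += encounter_cost * self.levy_rate

    def pay_claim(self, award: float) -> bool:
        """Pay a qualifying injury claim if the pool can cover it."""
        if award <= self.balance:
            self.balance -= award
            return True
        return False

fund = CompensationFund(levy_rate=0.01)  # hypothetical 1% levy
for _ in range(50_000):                  # 50,000 encounters at $200 each
    fund.record_use(200.0)
print(f"Pool balance: ${fund.balance:,.0f}")  # $100,000 collected
```

The design mirrors the vaccine program's appeal: no party's fault ever needs to be adjudicated, because the cost of the algorithm's known, unavoidable error rate is spread thinly across every use.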
To be sure, there would be some administrative hurdles to overcome in setting up such a system. One reason the vaccine program works so well is that the negative outcomes from vaccines have been well documented and catalogued in a comprehensive table that stipulates the various complications that can give rise to a claim. Creating a similar table for algorithmic complications would be more difficult because the adverse events resulting from AI algorithms will change as algorithms evolve, but it would not be impossible. The table would simply have to be slightly more adaptive than the one currently used for vaccines, and the program would have to allow for a means of compensation for new complications before they’ve been added to the table.
The problem of determining liability for injuries to patients when artificially intelligent medical algorithms make mistakes exists primarily in the realm of speculation for the moment. But as algorithms improve and physicians incorporate them more into their practice, injured patients may begin to find themselves without a viable legal remedy. If Congress wants to ensure that patients aren’t left without compensation for injuries sustained through algorithmic medicine, the National Vaccine Injury Compensation Program may serve as a model for a potential solution. There will be complications arising from algorithmic medicine — no matter how good it is — and we shouldn’t allow the affected patients to fall through the cracks in the tort system.