Medical Errors – The Third Leading Cause of Death in the US

By Matthew Young

John James, PhD, became involved in the movement to bring greater attention to patient safety and rampant medical errors by way of tragedy. In 2002, Dr. James lost his 19-year-old son as a result of problematic care provided by cardiologists at a hospital in central Texas. A toxicologist by training, Dr. James taught himself cardiology in order to piece together the events that led to his son's death, a death that occurred despite an extensive evaluation by a team of cardiologists. His journey is chronicled in his book, “A Sea of Broken Hearts: Patient Rights in a Dangerous, Profit-Driven Health Care System.” From there, Dr. James became an advocate for patient safety and a crusader against medical errors. His website is called Patient Safety America.

Major media outlets around the globe extensively covered the recent British Medical Journal article showing that medical errors are the third leading cause of death in the US. In 2013, Dr. James published a related paper in the Journal of Patient Safety estimating that nearly 440,000 lives per year are lost to medical errors in the American healthcare system.

I wanted to provide Bill of Health readers with a summary of how Dr. James's paper in many ways presaged, and perhaps even exceeds, the recent BMJ article. A KevinMD article provides further context in this debate.

In their British Medical Journal (BMJ) paper entitled “Medical error – the third leading cause of death in the US,” Drs. Makary and Daniel have drawn additional attention to the extensive problem of lethal medical errors. They emphasize that these errors are not captured on death certificates. Their core calculation finds that 251,454 deaths result from medical errors in U.S. hospitals each year.

While the studies Drs. Makary and Daniel used were precisely those used in Dr. James's Journal of Patient Safety (JPS) paper, they did not use results from a pilot study that predated the 2010 study from the U.S. Office of Inspector General. The BMJ and JPS analyses were both performed on data from 2002 to 2008. Dr. James's JPS study employed a base year of 2007, in which there were 34.4 million hospital admissions, because that year is within the bounds of the records studied. That may be a better choice than the BMJ authors' choice of 2013, which is well outside the years of the medical records they reviewed. If the BMJ study had used 2007 as its base year, then the result would have been 34.4 million × (251,000 / 37 million) ≈ 233,000. It is reasonable to suppose that there may have been substantial changes in the rate of medical errors from 2007 to 2013, hopefully a decline (for example, the US Centers for Disease Control reports substantial reductions in many types of hospital-acquired infections over that period, and the Global Trigger Tool used to capture adverse events includes hospital-acquired infections).
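As a quick check on that rescaling, here is a minimal sketch in Python, assuming (as the figure above implies) that the BMJ calculation rests on roughly 37 million admissions in its 2013 base year:

```python
# Rescaling the BMJ core estimate from a 2013 base year to a 2007 base year.
bmj_deaths = 251_000            # BMJ core estimate (2013 base year)
admissions_2013 = 37_000_000    # base-year admissions implied by the figure above
admissions_2007 = 34_400_000    # hospital admissions in 2007 (the JPS base year)

death_rate = bmj_deaths / admissions_2013    # deaths per admission
rescaled = death_rate * admissions_2007      # estimate on a 2007 base year

print(f"{rescaled:,.0f}")  # ≈ 233,000
```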

In table 1, the BMJ paper employed a simple average when aggregating the various data sources. However, because the number of medical records examined in the North Carolina study is 3-fold higher than in the other two (4), perhaps a better way is to use a weighted average: add up the total patient admissions and the total number of deaths due to adverse events, and then apply the average of the preventability factors (69%). This is the approach used in Dr. James's paper. Using a weighted average of the three BMJ estimates, with a factor of 3 applied to the North Carolina study and 1 to the other two smaller studies, the core estimate drops from 251,000 to 205,000. The BMJ study, in contrast, gives equal weight to studies involving vastly different numbers of records. A sketch of the arithmetic follows.
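The per-study record counts and death rates below are illustrative placeholders only (in roughly the 3:1:1 size ratio described above), not the actual Table 1 values; the sketch shows why pooling records changes the answer:

```python
# Simple vs. weighted averaging across three studies of unequal size.
# All record counts and per-record lethal-event rates are hypothetical
# placeholders; the real values are in Table 1 of the BMJ paper.
studies = {
    "North Carolina": {"records": 2_400, "rate": 0.004},  # hypothetical
    "OIG 2010":       {"records":   800, "rate": 0.006},  # hypothetical
    "Classen et al.": {"records":   800, "rate": 0.009},  # hypothetical
}

# Simple average: every study counts equally, regardless of size.
simple = sum(s["rate"] for s in studies.values()) / len(studies)

# Weighted average: pool records and deaths, so the larger North Carolina
# study carries proportionally more weight.
total_records = sum(s["records"] for s in studies.values())
total_deaths = sum(s["records"] * s["rate"] for s in studies.values())
weighted = total_deaths / total_records

print(f"simple:   {simple:.4f} deaths per record")
print(f"weighted: {weighted:.4f} deaths per record")
# With these placeholder rates the weighted mean falls below the simple mean,
# mirroring the drop from 251,000 to 205,000 described above.
```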

All the authors believe their estimates to be underestimates. Dr. James cites three reasons for this: firstly, the Global Trigger Tool, which was the primary adverse-event finder in all three studies, misses many errors of omission, communication, and context; secondly, the Trigger Tool also misses errors that are not evident in the medical records; and finally, the Trigger Tool does not detect many diagnostic errors. A study by Weissman et al. provides insight into the magnitude of underestimation. Studying the medical records of 1,000 hospitalized cardiac patients, they found that patient reports of serious preventable harm, verified by the research team, were three-fold higher than those discovered by physician review of the medical records. Dr. James's JPS paper used a factor of only 2 to account for all the missed adverse events except diagnostic errors. According to Dr. James, given literature estimating that 40,000 to 80,000 people die from missed diagnoses each year, the addition of 20,000 deaths from diagnostic errors in hospitals each year seems reasonable. If anything, these adjustments to the core estimate of 210,000 were probably low. Dr. James's published final estimate was 210,000 × 2 + 20,000 = 440,000 deaths each year to which preventable adverse events contribute.
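Dr. James's adjustment chain is straightforward to reproduce; this sketch uses only the figures stated above:

```python
# Reproducing Dr. James's final JPS estimate from the figures quoted above.
core_estimate = 210_000            # deaths from documented preventable adverse events
missed_events_factor = 2           # correction for events the Global Trigger Tool misses
                                   # (Weissman et al. suggest the true factor may be higher)
diagnostic_error_deaths = 20_000   # addition for missed diagnoses in hospitals

final_estimate = core_estimate * missed_events_factor + diagnostic_error_deaths
print(f"{final_estimate:,}")  # 440,000
```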

Additionally, Dr. James reminds us that many of the deaths resulting from non-evidence-based care in hospitals do not occur while the patient is hospitalized. The adverse event that shortens life may occur long after discharge and go unrecognized. A classic example, from a few years before the time frame of the medical records in these studies, is that many patients were dying prematurely of heart failure because they were not receiving beta blockers after a myocardial infarction. It seems that, at last, by 2007 nearly all patients who needed beta blockers were finally getting them. The seminal study on the value of beta blockers was published in 1982 in JAMA, yet as late as the early 2000s, tens of thousands of people with heart failure were dying prematurely each year, presumably because many clinicians outside hospitals who could have given the life-prolonging drug did not. Such patients did die of heart failure, but they died earlier than they would have if they had been prescribed a beta blocker. A medical error of omission contributed to their deaths. These errors are hard to capture.

Another wrinkle Dr. James offers is that it may be misleading, once medical errors are acknowledged, to present causes of death as independent events, which is what the BMJ paper suggests in Table 2. One can suppose with some confidence that medical errors contributed a premature death to many of those who died of heart disease or cancer. A more appropriate way to express the national impact of medical errors is to note that about 2.4 million Americans die each year, and roughly 1/6th of those deaths (400,000) are hastened by preventable mistakes originating in hospitals. Of course, one has to contend with the debate about what constitutes a “hastened” death, which is another wrinkle within a wrinkle.
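For readers who want to verify the fraction, the arithmetic is simple (figures from the paragraph above):

```python
# Checking the "roughly 1/6th" framing against the figures above.
annual_us_deaths = 2_400_000
deaths_hastened = 400_000
print(deaths_hastened / annual_us_deaths)  # 0.1666... ≈ 1/6
```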

In sum, the study by Drs. Makary and Daniel has drawn valuable additional attention to the problem of medical error as the third leading cause of death in the US, and Dr. James's analyses do not substantially alter that figure. In many ways, he came up with it first. Whichever way we decide to make these calculations, we must acknowledge that analyses of the limited data we have now must be performed with careful attention to optimizing the analytical approach, that the conclusions from those analyses must reflect the reality that people often die of more than one cause, and that there must be a national consensus on what constitutes a preventable adverse event, or harmful medical error. Many medical errors do not cause harm. One hopes that a consensus definition emerges so that we can finally begin to count these errors with more certainty and track their decline.
