
Why COVID-19 is a Chronic Health Concern for the US

By Daniel Aaron

The U.S. government has enacted a record-breaking $2 trillion stimulus package just as the country has soared past 100,000 coronavirus cases and 1,500 deaths (as of March 27). The U.S. now has the most cases of any country, and this despite undercounting due to continuing problems in testing Americans, stemming from various scientific and policy failures.

Coronavirus has scared Americans. Public health officials and physicians are urging people to stay at home because this disease kills. Many have invoked the language of war, implying a temporary battle against a foreign foe. This framing, though it may galvanize quick support, disregards our own systematic policy failures to prevent, test for, and trace coronavirus, as well as the more general need to solve important policy problems.

Coronavirus is an acute problem at the individual level, but nationally it represents a chronic concern. No doubt, developing innovative ways to increase the number of ventilators, recruit health care workers, and improve hospital capacity will save lives in the short term, despite mixed messages from the federal government. But a long-term perspective is needed to address the serious problems underlying our country's systemic failures across public health.

Read More


Understanding Racial Bias in Medical AI Training Data

By Adriana Krasniansky

Interest in artificial intelligence (AI) for health care has grown at an astounding pace: the global AI health care market is expected to reach $17.8 billion by 2025, and AI-powered systems are being designed to support medical activities ranging from patient diagnosis and triage to drug pricing.

Yet, as researchers across the technology and medical fields agree, "AI systems are only as good as the data we put into them." When AI systems are trained on patient datasets that are incomplete, or that underrepresent or misrepresent certain populations, they stand to develop discriminatory biases in their outcomes. In this article, we present three examples that demonstrate the potential for racial bias in medical AI arising from training data.

Read More