
Mitigating Bias in Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Recently, Google announced a new direct-to-consumer (DTC) health app powered by artificial intelligence (AI) to diagnose skin conditions.

The company faced criticism for the app because the AI was trained primarily on images from people with fair skin, darker white skin, and light brown skin. This means the app may end up over- or under-diagnosing conditions for people with darker skin tones.
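One way to make this risk concrete is to measure a model's accuracy separately for each skin-tone group rather than in aggregate. The sketch below is a minimal illustration of that kind of stratified audit; the records, the predict function, and the fitzpatrick_type field are hypothetical stand-ins for illustration, not details of Google's system.

from collections import defaultdict

def accuracy_by_skin_type(records, predict):
    # records: dicts with 'image', 'label', and 'fitzpatrick_type' (I-VI).
    # predict: the diagnostic model under audit (hypothetical here).
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        group = r["fitzpatrick_type"]
        total[group] += 1
        if predict(r["image"]) == r["label"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# A large accuracy gap between lighter (I-II) and darker (V-VI) groups
# would signal the over- or under-diagnosis risk described above.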

This prompts the questions: How can we mitigate biases in AI-based health care? And how can we ensure that AI improves health care, rather than augmenting existing health disparities?

That’s what we asked respondents in our In Focus Series on Direct-to-Consumer Health Apps. Read their answers below, and check out their responses to the other questions in the series.

Read More


Perspectives on Data Privacy for Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Direct-to-consumer (DTC) health apps, such as apps that manage our diet, fitness, and sleep, are becoming ubiquitous in our digital world.

These apps provide a window into some of the key issues in the world of digital health — including data privacy, data access, data ownership, bias, and the regulation of health technology.

To better understand these issues, and ways forward, we contacted key stakeholders representing a range of perspectives in the field of digital health for their brief answers to five questions about DTC health apps.

Read More


Intentional Commitments to Diversity, Equity, Inclusion Needed in Health Care

By Eloho E. Akpovi

“They told me my baby was going to die.” Those words have sat with me since my acting internship in OB/GYN last summer. They were spoken by a young, Black, pregnant patient presenting to the emergency room to rule out preeclampsia.

As a Black woman and a medical student, those words were chilling. They reflect a health care system that is not built to provide the best care for Black patients and trains health care professionals in a way that is tone-deaf to racism and its manifestations in patient care.

Read More


Bias, Fairness, and Deep Phenotyping

By Nicole Martinez

Deep phenotyping research has the potential to improve understanding of the social and structural factors that contribute to psychiatric illness, allowing for more effective approaches to addressing inequities that impact mental health.

But in order to build on the promise of deep phenotyping and minimize the potential for bias and discrimination, it will be important to incorporate the perspectives of diverse communities and stakeholders in the development and implementation of research projects.

Read More


Detecting Dementia

Cross-posted, with slight modification, from Harvard Law Today, where it originally appeared on November 21, 2020. 

By Chloe Reichel

Experts gathered last month to discuss the ethical, social, and legal implications of technological advancements that facilitate the early detection of dementia.

“Detecting Dementia: Technology, Access, and the Law” was hosted on Nov. 16 as part of the Project on Law and Applied Neuroscience, a collaboration between the Center for Law, Brain and Behavior at Massachusetts General Hospital and the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

The event, organized by Francis X. Shen ’06 Ph.D. ’08, the Petrie-Flom Center’s senior fellow in Law and Applied Neuroscience and executive director of the Center for Law, Brain and Behavior at Massachusetts General Hospital, was one of a series hosted by the Project on Law and Applied Neuroscience on aging brains.

Early detection of dementia is a hopeful prospect for the treatment of patients, both because it may facilitate early medical intervention and because it allows for more robust advance care planning.

Read More


Why COVID-19 is a Chronic Health Concern for the US

By Daniel Aaron

The U.S. government has enacted a record-breaking $2 trillion stimulus package just as the country has soared past 100,000 coronavirus cases and 1,500 deaths (as of March 27). The U.S. now has the most cases of any country, and this despite undercounting due to continuing problems in testing Americans stemming from various scientific and policy failures.

Coronavirus has scared Americans. Public health officials and physicians are urging people to stay at home because this disease kills. Many have invoked the language of war, implying a temporary battle against a foreign foe. Though this framing may galvanize quick support, it disregards our own systemic policy failures to prevent, test for, and trace coronavirus, and the more general need to solve important policy problems.

Coronavirus is an acute problem at the individual level, but nationally it represents a chronic concern. No doubt, developing innovative ways to increase the number of ventilators, recruit health care workers, and improve hospital capacity will save lives in the short term — despite mixed messages from the federal government. But a long-term perspective is needed to address the serious problems underlying our country’s systemic failures across public health.

Read More


Understanding Racial Bias in Medical AI Training Data

By Adriana Krasniansky

Interest in artificial intelligence (AI) for health care has grown at an astounding pace: the global AI health care market is expected to reach $17.8 billion by 2025, and AI-powered systems are being designed to support medical activities ranging from patient diagnosis and triage to drug pricing.

Yet, as researchers across technology and medical fields agree, “AI systems are only as good as the data we put into them.” When AI systems are trained on patient datasets that are incomplete, or that underrepresent or misrepresent certain populations, they stand to develop discriminatory biases in their outcomes. In this article, we present three examples that demonstrate the potential for racial bias in medical AI stemming from training data.

Read More
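As a minimal sketch of the kind of audit this implies, the snippet below compares a training set's demographic mix against reference population shares and flags underrepresented groups before a model is ever trained. The group labels, shares, and 5% tolerance are illustrative assumptions, not values from any of the studies discussed.

from collections import Counter

def representation_gaps(train_groups, reference_shares, tolerance=0.05):
    # train_groups: one group label per training example.
    # reference_shares: dict mapping group -> expected population share.
    counts = Counter(train_groups)
    n = len(train_groups)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / n
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Example: group "B" makes up 40% of the population but only 10% of the data.
print(representation_gaps(["A"] * 90 + ["B"] * 10, {"A": 0.6, "B": 0.4}))
# -> {'B': {'expected': 0.4, 'observed': 0.1}}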