
Flipping the Script: Adoption and Reproductive Justice

By Kimberly McKee

Adoption is a reproductive justice issue. Pretending otherwise ignores how adoption is deployed as a red herring in anti-abortion arguments. A recent invocation of this faulty logic came in Justice Amy Coney Barrett's questions during the November 2021 oral arguments in Dobbs v. Jackson Women's Health Organization. Coney Barrett's statements implied that the option to relinquish infants through adoption rendered abortion access unnecessary. This line of thinking is familiar to me as both a Korean international, transracial adoptee and a critical adoption studies scholar.



Bias, Fairness, and Deep Phenotyping

By Nicole Martinez

Deep phenotyping research has the potential to improve understanding of the social and structural factors that contribute to psychiatric illness, allowing for more effective approaches to addressing inequities that impact mental health.

But to build on the promise of deep phenotyping while minimizing the potential for bias and discrimination, it will be important to incorporate the perspectives of diverse communities and stakeholders in the development and implementation of research projects.



Understanding Racial Bias in Medical AI Training Data

By Adriana Krasniansky

Interest in artificial intelligence (AI) for health care has grown at an astounding pace: the global AI health care market is expected to reach $17.8 billion by 2025, and AI-powered systems are being designed to support medical activities ranging from patient diagnosis and triage to drug pricing.

Yet, as researchers across technology and medical fields agree, “AI systems are only as good as the data we put into them.” When AI systems are trained on patient datasets that are incomplete, or that underrepresent or misrepresent certain populations, they stand to develop discriminatory biases in their outcomes. In this article, we present three examples that demonstrate the potential for racial bias in medical AI stemming from training data.