When AI Turns Miscarriage into Murder: The Alarming Criminalization of Pregnancy in the Digital Age

by Abeer Malik

Imagine: Overjoyed at your pregnancy, you eagerly track every milestone, logging daily habits and symptoms into a pregnancy app. Then tragedy strikes—a miscarriage. Amidst your grief, authorities knock at your door. They’ve been monitoring your digital data and now question your behavior during pregnancy, possibly building a case against you using your own information as evidence.

This dystopian scenario edges closer to reality as artificial intelligence (AI) becomes more embedded in reproductive health care. In a post-Dobbs world where strict fetal personhood laws are gaining traction, AI’s predictive insights into miscarriage and stillbirth risk becoming tools of surveillance, casting suspicion on women who suffer natural pregnancy losses.

The criminalization of pregnancy outcomes is not new, but AI introduces a high-tech dimension to an already chilling trend. At stake are women’s privacy and their fundamental right to make decisions about their own bodies without fear of criminal prosecution. Alarmingly, the law is woefully unprepared for this technological intrusion.

AI Meets Fetal Personhood Laws: A Recipe for Criminalization

Fetal personhood laws, in effect in at least 38 states, grant legal rights to fetuses, sometimes rights that surpass the mother’s own. Initially intended to protect pregnant women from violence, these laws are now being used to prosecute women for adverse pregnancy outcomes. The case of Brittney Poolaw, a young Native American woman from Oklahoma, illustrates the risk: after miscarrying at 17 weeks, Poolaw was convicted of manslaughter and sentenced to four years in prison. Prosecutors alleged that drug use caused the miscarriage despite medical evidence of multiple possible causes. Poolaw’s story is no anomaly but a sobering warning about the consequences of criminalizing pregnancy loss in states with fetal personhood laws.

Now, consider AI health apps entering the equation. BobiHealth, for instance, uses machine learning to predict pregnancy risks by analyzing users’ vital signs, diet, and lifestyle factors. If a woman is flagged as “high-risk,” could her behavior during pregnancy become grounds for a prosecutor to argue negligence or intent should a miscarriage occur? Under strict fetal personhood laws, this is a terrifying possibility. AI risk scores, if misread as definitive evidence of intentional harm to the fetus, could transform a personal tragedy into a public legal ordeal, amplifying the consequences of these controversial laws.
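To see how little such a flag actually proves, consider a minimal sketch of how an app-style risk model might work. Everything below is hypothetical: the features, weights, and threshold are invented for illustration and are not BobiHealth’s actual system. The point is that the output is a statistical probability drawn from population-level correlations, not a finding about any individual’s conduct or intent.

```python
# Hypothetical sketch of an app-style pregnancy risk score.
# Features, weights, and thresholds are invented for illustration;
# this is NOT any real product's model.
import math

WEIGHTS = {
    "caffeine_mg_per_day": 0.002,
    "resting_heart_rate": 0.01,
    "sleep_hours": -0.15,
    "reported_stress_level": 0.2,   # self-reported, 0-10 scale
}
BIAS = -1.5
HIGH_RISK_THRESHOLD = 0.5

def risk_score(features: dict) -> float:
    """Logistic-regression-style score: a probability, not a diagnosis."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

user = {
    "caffeine_mg_per_day": 200,   # roughly two cups of coffee
    "resting_heart_rate": 80,
    "sleep_hours": 6,
    "reported_stress_level": 7,
}

p = risk_score(user)
print(f"estimated risk: {p:.2f}")
if p >= HIGH_RISK_THRESHOLD:
    # The score reflects correlations in the training data,
    # not causation, negligence, or intent.
    print("flagged as 'high-risk'")
```

Even in this toy model, an ordinary stretch of pregnancy (two cups of coffee, a short night’s sleep, a stressful week) is enough to cross the flagging threshold, which is precisely why a “high-risk” label cannot bear the evidentiary weight a prosecutor might place on it.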

Predictive Policing of the Womb

AI’s use in predicting pregnancy outcomes extends the controversial concept of predictive policing (using data to anticipate criminal behavior) into reproductive health. Even routine behaviors, like drinking coffee or commuting through a stressful neighborhood, could trigger AI-generated “risk alerts,” which could be used under fetal personhood laws to claim that a woman knowingly endangered her pregnancy.

Predictive policing in criminal law has long been criticized for biases and inaccuracies, and applying it to reproductive health is equally problematic. AI algorithms trained on biased data may disproportionately target marginalized women, particularly women of color and those from low-income communities. The veneer of scientific legitimacy that AI lends to these predictions could easily turn natural or unavoidable pregnancy losses into criminal matters, creating an Orwellian world where pregnancy is constantly surveilled and personal autonomy is compromised.
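How biased data becomes a biased “prediction” can be shown mechanically. The toy simulation below is entirely hypothetical (the groups, rates, and labeling process are invented): it assumes two populations with identical true rates of adverse outcomes, but where one group’s outcomes are recorded, through closer surveillance, three times as often. A model trained on those records will “learn” that the more-surveilled group is riskier.

```python
# Toy simulation: uneven reporting in training data yields uneven risk flags.
# Groups, rates, and the labeling process are hypothetical illustrations.
import random

random.seed(0)

def make_records(n: int, group: str, report_rate: float) -> list:
    """Simulate pregnancies with IDENTICAL true risk across groups,
    but where adverse outcomes are recorded (labeled) at different
    rates, mirroring uneven surveillance of marginalized communities."""
    records = []
    for _ in range(n):
        true_adverse = random.random() < 0.10            # same true risk
        labeled = true_adverse and random.random() < report_rate
        records.append({"group": group, "label": labeled})
    return records

# Group B's adverse outcomes are recorded three times as often as group A's.
train = make_records(10_000, "A", report_rate=0.2) + \
        make_records(10_000, "B", report_rate=0.6)

# Observed label rates -- what a naive model would learn from these records.
for g in ("A", "B"):
    recs = [r for r in train if r["group"] == g]
    rate = sum(r["label"] for r in recs) / len(recs)
    print(f"group {g}: observed 'risk' rate = {rate:.3f}")
# Prints roughly 0.020 for A and 0.060 for B despite identical true risk:
# the surveillance gap, laundered into an apparently objective prediction.
```

The underlying risk is the same in both groups; only the record-keeping differs. A model trained on such labels inherits the surveillance pattern and returns it with the authority of a prediction, which is how algorithmic bias reaches exactly the women the criminal legal system already over-polices.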

The Legal Vacuum: U.S. Law Lags Behind AI’s Reach

The U.S. legal framework surrounding AI and data privacy is alarmingly thin. The Health Insurance Portability and Accountability Act (HIPAA) protects medical records within health care systems but does not extend to data collected by consumer apps or wearable devices, such as period-tracking or pregnancy-monitoring tools. This gap leaves private, sensitive information vulnerable to third-party misuse. A 2023 Privacy International report revealed that numerous pregnancy-tracking apps routinely share user data with advertisers and data brokers, making health information a commodity in an unregulated marketplace, ripe for exploitation. Senator Ron Wyden of Oregon recently disclosed that data broker Near Intelligence allegedly sold data on visits to nearly 600 Planned Parenthood locations across 48 states without consent, data that was later used to target anti-abortion ad campaigns.

Meanwhile, the U.S. also lacks targeted legislation regulating AI in health care settings. While agencies like the U.S. Food and Drug Administration (FDA) have issued guidance on AI as a medical device, that guidance typically focuses on efficacy and safety in traditional medical settings rather than on patients’ use of consumer health technologies and apps.

Congress has attempted to pass a federal law addressing these privacy gaps, most notably the 2022 American Data Privacy and Protection Act, but the bill ultimately stalled amid tech lobbying and political gridlock. This year, a new proposal, the American Privacy Rights Act (APRA), reintroduced federal data privacy protections with updated provisions. APRA’s latest draft sets baseline standards for data privacy, including strict limits on data collection and explicit-consent requirements for sensitive, health-related data. However, the bill faces heavy opposition and remains under debate, leaving pregnant women exposed to potential digital surveillance and criminalization.

By contrast, the European Union’s General Data Protection Regulation (GDPR) sets a strong precedent. Under GDPR, companies must obtain explicit consent to collect health data, and individuals have the right to control their information. With strict penalties for non-compliance, GDPR incentivizes companies to adhere to high privacy standards, providing a model that the U.S. could benefit from adopting.

Defending Women’s Autonomy in the Digital Age: The Way Forward

Protecting reproductive autonomy and privacy requires urgent action on several fronts. Federal legislation like APRA should be enacted to address the misuse of sensitive health information, covering all health data, including data collected by consumer apps, and imposing strict consent and data-sharing rules to curb exploitation. Similarly, HIPAA should be expanded to reach health data collected by consumer apps, ensuring that sensitive pregnancy data is protected regardless of where it is collected. Additionally, the FDA and related agencies should issue specific guidelines for AI-driven health tools used in consumer settings, ensuring that AI-generated health predictions are transparent, scientifically sound, and restricted in their potential use in criminal proceedings. Finally, advocacy efforts should prioritize medical expertise in matters of reproductive health care and challenge fetal personhood laws that create precedents for prosecuting pregnancy outcomes as criminal acts.

The cases of Brittney Poolaw and countless other women highlight the stakes when the law prioritizes fetal rights over women’s autonomy. As Justice Louis Brandeis famously said, “Sunlight is said to be the best of disinfectants.” Bringing these issues to light is the first step toward combating them. We must push for rigorous federal privacy protections, hold tech companies accountable, and ensure that the digital age does not descend into a dystopian era of state surveillance. The choices we make now will shape not only the future of reproductive rights but the very fabric of our digital society.


Abeer Malik’s (LL.M. 2025) research interests include medical law, law and technology, and corporate law. Her research project will examine the legal and ethical implications of AI’s integration into precision medicine (PM), focusing on the distinct challenges AI introduces compared to general health care.

