When AI Turns Miscarriage into Murder: The Alarming Criminalization of Pregnancy in the Digital Age

By Abeer Malik

Imagine: Overjoyed at your pregnancy, you eagerly track every milestone, logging daily habits and symptoms into a pregnancy app. Then tragedy strikes—a miscarriage. Amidst your grief, authorities knock at your door. They’ve been monitoring your digital data and now question your behavior during pregnancy, possibly building a case against you using your own information as evidence.

This dystopian scenario edges closer to reality as artificial intelligence (AI) becomes more embedded in reproductive health care. In a post-Dobbs world where strict fetal personhood laws are gaining traction, AI’s predictive insights into miscarriage or stillbirth are at risk of becoming tools of surveillance, casting suspicion on women who suffer natural pregnancy losses.

The criminalization of pregnancy outcomes is not new, but AI introduces a high-tech dimension to an already chilling trend. At stake are women’s privacy and their fundamental right to make decisions about their own bodies without fear of criminal prosecution. Alarmingly, the law is woefully unprepared for this technological intrusion.

Read More

Artificial Intelligence Plus Data Democratization Requires New Health Care Framework

By Michael L. Millenson

The latest draft government strategic plan for health information technology pledges to support health information sharing among individuals, health care providers and others “so that they can make informed decisions and create better health outcomes.”

Those good intentions notwithstanding, the current health data landscape is dramatically different from when the organizational author of the plan, the Office of the National Coordinator for Health IT, was formed two decades ago. As Price and Cohen have pointed out, entities subject to federal Health Insurance Portability and Accountability Act (HIPAA) requirements represent just the tip of the informational iceberg. Looming larger are health information generated by non-HIPAA-covered entities, user-generated health information, and non-health information used to generate inferences about treatment and health improvement.

Read More

Sorry, You Probably Cannot Get MDMA Through Telehealth

By Vincent Joralemon

The U.S. Food and Drug Administration’s recent acceptance of an MDMA-assisted therapy New Drug Application has experts buzzing over expanded access to the infamous substance commonly known as “ecstasy” or “molly.” 

Yet once the drug is approved, FDA will place limits on its use. If past psychedelics are any indication, MDMA will likely need to be administered in a clinic under strict protocols, meaning patients will have to wait for other MDMA products to complete clinical trials before at-home, private use of the drug becomes a reality.

Read More

Thank Ketamine for the Telehealth Extension

By Vincent Joralemon

In my last post, I discussed the rise of psychedelic lobbying — how companies with vested economic interests in psychedelics have applied pressure to shape regulations that favor their business models.

One such initiative — the ketamine therapy industry’s push to extend the COVID-era telemedicine flexibilities for prescriptions of controlled substances — highlights how sophisticated these campaigns can be, and how their impact stretches beyond the psychedelic industry.

Read More

Defragmenting European Law on Medical AI

By Audrey Lebret

In the medical field, artificial intelligence (AI) is of great operational and clinical use. It eases the administrative burden on doctors, helps in the allocation of health care resources, and improves the quality of diagnosis. It also raises numerous challenges and risks. Balancing competitiveness with the need for risk prevention, Europe aims to become a major digital player through its AI framework strategy, particularly in the field of digital health. The following provides a brief overview of the normative landscape of medical AI in Europe, beyond the borders of the EU and its 27 Member States. It also takes into account the treaties in force or emerging at the level of the Council of Europe and its 46 Member States. The purpose is to illustrate the reasons for, and difficulties associated with, legal fragmentation in the field, and to briefly mention a few key elements of the necessary defragmentation.

Read More

FDA Solicits Feedback on the Use of AI and Machine Learning in Drug Development

By Matthew Chun

The U.S. Food and Drug Administration (FDA), in fulfilling its task of ensuring that drugs are safe and effective, has recently turned its attention to the growing use of artificial intelligence (AI) and machine learning (ML) in drug development. On May 10, FDA published a discussion paper on this topic and requested feedback “to enhance mutual learning and to establish a dialogue with FDA stakeholders” and to “help inform the regulatory landscape in this area.” In this blog post, I will summarize the main themes of the discussion paper, highlight areas where FDA seems particularly concerned, and detail how interested parties can engage with the agency on these issues.

Read More

Who’s Liable for Bad Medical Advice in the Age of ChatGPT?

By Matthew Chun

By now, everyone’s heard of ChatGPT — an artificial intelligence (AI) system by OpenAI that has captivated the world with its ability to process and generate humanlike text in various domains. In the field of medicine, ChatGPT has already been reported to ace the U.S. medical licensing exam, diagnose illnesses, and even outshine human doctors on measures of perceived empathy, raising many questions about how AI will reshape health care as we know it.

But what happens when AI gets things wrong? What are the risks of using generative AI systems like ChatGPT in medical practice, and who is ultimately held responsible for patient harm? This blog post will examine the liability risks for health care providers and AI providers alike as ChatGPT and similar AI models are increasingly used for medical applications.

Read More

Governing Health Data for Research, Development, and Innovation: The Missteps of the European Health Data Space Proposal

By Enrique Santamaría

Together with the Data Governance Act (DGA) and the General Data Protection Regulation (GDPR), the proposal for a Regulation on the European Health Data Space (EHDS) will most likely form the new regulatory and governance framework for the use of health data in the European Union. Although well-intentioned and much needed, the EHDS has aspects that require further debate, reconsideration, and amendment. Clarity about what constitutes scientific research is particularly needed.

Read More