
Sorry, You Probably Cannot Get MDMA Through Telehealth

By Vincent Joralemon

The U.S. Food and Drug Administration’s recent acceptance of an MDMA-assisted therapy New Drug Application has experts buzzing over expanded access to the infamous substance commonly known as “ecstasy” or “molly.” 

Yet, once approved, FDA will place limits on the drug. If past psychedelics are any indication, MDMA will likely need to be administered in a clinic under strict protocols, meaning patients will have to wait for other MDMA products to complete clinical trials before we see at-home, private use of the drug.



Thank Ketamine for the Telehealth Extension

By Vincent Joralemon

In my last post, I discussed the rise of psychedelic lobbying — how companies with vested economic interests in psychedelics have applied pressure to shape regulations that favor their business models.

One such initiative — the ketamine therapy industry’s push to extend the COVID-era telemedicine flexibilities for prescriptions of controlled substances — highlights how sophisticated these campaigns can be, and how their impact stretches beyond the psychedelic industry.



Defragmenting European Law on Medical AI

By Audrey Lebret

In the medical field, artificial intelligence (AI) is of great operational and clinical use. It eases doctors' administrative burden, helps allocate health care resources, and improves the quality of diagnosis. It also raises numerous challenges and risks. Balancing competitiveness with the need for risk prevention, Europe aims to become a major digital player through its AI framework strategy, particularly in the field of digital health. The following offers a brief overview of the normative landscape of medical AI in Europe, looking beyond the EU and its 27 Member States to the treaties in force or emerging at the level of the Council of Europe and its 46 Member States. The purpose is to illustrate the causes and difficulties of legal fragmentation in this field, and to briefly note a few key elements of the necessary defragmentation.



FDA Solicits Feedback on the Use of AI and Machine Learning in Drug Development

By Matthew Chun

The U.S. Food and Drug Administration (FDA), in fulfilling its task of ensuring that drugs are safe and effective, has recently turned its attention to the growing use of artificial intelligence (AI) and machine learning (ML) in drug development. On May 10, FDA published a discussion paper on this topic and requested feedback “to enhance mutual learning and to establish a dialogue with FDA stakeholders” and to “help inform the regulatory landscape in this area.” In this blog post, I will summarize the main themes of the discussion paper, highlighting areas where FDA seems particularly concerned, and detailing how interested parties can engage with the agency on these issues.



Who’s Liable for Bad Medical Advice in the Age of ChatGPT?

By Matthew Chun

By now, everyone’s heard of ChatGPT — an artificial intelligence (AI) system by OpenAI that has captivated the world with its ability to process and generate humanlike text in various domains. In the field of medicine, ChatGPT has already been reported to ace the U.S. medical licensing exam, diagnose illnesses, and even outshine human doctors on measures of perceived empathy, raising many questions about how AI will reshape health care as we know it.

But what happens when AI gets things wrong? What are the risks of using generative AI systems like ChatGPT in medical practice, and who is ultimately held responsible for patient harm? This blog post will examine the liability risks for health care providers and AI providers alike as ChatGPT and similar AI models are increasingly used for medical applications.



Governing Health Data for Research, Development, and Innovation: The Missteps of the European Health Data Space Proposal

By Enrique Santamaría

Together with the Data Governance Act (DGA) and the General Data Protection Regulation (GDPR), the proposed Regulation on the European Health Data Space (EHDS) will most likely form the new regulatory and governance framework for the use of health data in the European Union. Although well intentioned and sorely needed, the EHDS has aspects that require further debate, reconsideration, and amendment. Clarity about what constitutes scientific research is particularly needed.



Losing Control of Controlled Substances? The Case of Telehealth Prescriptions

By Minsoo Kwon

Telehealth services that specialize in the treatment of mental health concerns, such as Cerebral Inc., highlight the ongoing challenge of appropriately balancing accessibility of care with patient safety.

While increased accessibility of mental health care through telehealth is a valuable goal, if our aim is the well-being of patients, safety must remain paramount.
