AI, Copyright, and Open Science: Health Implications of the New York Times/OpenAI Lawsuit

By Adithi Iyer

The legal world is atwitter with the developing artificial intelligence (“AI”) copyright cage match between The New York Times and OpenAI. The Times filed its complaint in Manhattan Federal District Court on December 27, accusing OpenAI of unlawfully using its (copyrighted and paywalled) articles to train ChatGPT. OpenAI, in turn, published a sharply worded response on January 8, claiming that its incorporation of the material for training purposes squarely constitutes fair use. The case follows ongoing suits by authors against OpenAI on similar grounds, but the titanic scale of the Times-OpenAI dispute, and its application of these questions to the news media in federal litigation, make it one to watch. While much of the buzz around the case has centered on its intellectual property and First Amendment dimensions, it may also carry consequences for the health and biotech industries. Here’s a rundown of the major legal questions at play and the health-related stakes of a future decision.

What’s on the Horizon for Health and Biotech with the AI Executive Order

By Adithi Iyer

Last month, President Biden signed an Executive Order mobilizing an all-hands-on-deck approach to the cross-sector regulation of artificial intelligence (AI). One such sector (mentioned, by my count, 33 times) is health care. This is perhaps unsurprising — the health sector touches almost every other aspect of American life, and of course continues to intersect heavily with technological developments. AI is particularly paradigm-shifting here: the technology already exponentially advances existing capabilities in analytics, diagnostics, and treatment development. This Executive Order is, therefore, as important a development for health care practitioners and researchers as it is for legal experts. Here are some intriguing takeaways:

FDA Solicits Feedback on the Use of AI and Machine Learning in Drug Development

By Matthew Chun

The U.S. Food and Drug Administration (FDA), in fulfilling its task of ensuring that drugs are safe and effective, has recently turned its attention to the growing use of artificial intelligence (AI) and machine learning (ML) in drug development. On May 10, FDA published a discussion paper on this topic and requested feedback “to enhance mutual learning and to establish a dialogue with FDA stakeholders” and to “help inform the regulatory landscape in this area.” In this blog post, I will summarize the main themes of the discussion paper, highlighting areas where FDA seems particularly concerned, and detailing how interested parties can engage with the agency on these issues.

Who’s Liable for Bad Medical Advice in the Age of ChatGPT?

By Matthew Chun

By now, everyone’s heard of ChatGPT — an artificial intelligence (AI) system by OpenAI that has captivated the world with its ability to process and generate humanlike text in various domains. In the field of medicine, ChatGPT has already been reported to ace the U.S. medical licensing exam, diagnose illnesses, and even outshine human doctors on measures of perceived empathy, raising many questions about how AI will reshape health care as we know it.

But what happens when AI gets things wrong? What are the risks of using generative AI systems like ChatGPT in medical practice, and who is ultimately held responsible for patient harm? This blog post will examine the liability risks for health care providers and AI providers alike as ChatGPT and similar AI models are increasingly used for medical applications.

The Council of Europe’s Artificial Intelligence Convention: Implications for Health and Patients

By Hannah van Kolfschooten

The Council of Europe, the most important international human rights organization on the European continent, is currently drafting a Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (AI Convention). The Convention aims to protect fundamental rights against the harms of artificial intelligence (AI) and is expected to become a leading global convention, as non-European states such as the United States (U.S.) are considering becoming signatories.

As health care is among the leading sectors for AI adoption, the forthcoming AI Convention will have important implications for the protection of health and patients. This post gives a brief outline of the Convention’s background, scope, and purpose. It then flags common human rights issues associated with medical AI and touches upon the most important health rights implications of the Convention’s current text.

How Artificial Intelligence is Revolutionizing Drug Discovery

By Matthew Chun

In recent months, generative artificial intelligence (AI) has taken the world by storm. AI systems like ChatGPT and Stable Diffusion have captured the imagination of the masses with their impressive and sometimes controversial ability to generate human-like text and artwork. However, it may come as a surprise to some that — in addition to writing Twitter threads and dating app messages — AI is also well on its way to revolutionizing the discovery of life-saving drugs.

AI in Digital Health: Autonomy, Governance, and Privacy

The following post is adapted from the edited volume AI in eHealth: Human Autonomy, Data Governance and Privacy in Healthcare.

By Marcelo Corrales Compagnucci and Mark Fenwick

The emergence of digital platforms and related technologies is transforming healthcare and creating new opportunities and challenges for all stakeholders in the medical space. Many of these developments rely on data and AI algorithms to prevent, diagnose, treat, and monitor sources of epidemic diseases, such as the ongoing pandemic and other pathogenic outbreaks. However, these opportunities and challenges are often complex and multidimensional, and any mapping of this emerging ecosystem requires a greater degree of interdisciplinary dialogue and a more nuanced appreciation of the normative and cognitive complexity of these issues.

Mitigating Bias in Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Recently, Google announced a new direct-to-consumer (DTC) health app powered by artificial intelligence (AI) to diagnose skin conditions.

The company drew criticism for the app because the AI was trained primarily on images from people with darker white skin, light brown skin, and fair skin. This means the app may end up over- or under-diagnosing conditions for people with darker skin tones.
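One concrete way such skew surfaces is in subgroup error rates: a model trained mostly on lighter skin can look accurate overall while systematically missing or over-calling conditions in underrepresented groups. Below is a minimal, hypothetical sketch of that kind of audit; the column names, Fitzpatrick-style skin-tone labels, and toy data are assumptions for illustration, not details of Google’s app.

```python
# Minimal sketch: auditing a diagnostic classifier's error rates by skin tone.
# Assumes a DataFrame with hypothetical columns: "skin_tone" (a Fitzpatrick-
# style label), "label" (true diagnosis, 1 = condition present), and "pred"
# (the model's output). All values below are toy data.
import pandas as pd

def audit_by_subgroup(df: pd.DataFrame) -> pd.DataFrame:
    """Per-subgroup over-diagnosis (false positive) and under-diagnosis
    (false negative) rates."""
    def rates(g: pd.DataFrame) -> pd.Series:
        return pd.Series({
            "n": len(g),
            "over_dx_rate": ((g["pred"] == 1) & (g["label"] == 0)).mean(),
            "under_dx_rate": ((g["pred"] == 0) & (g["label"] == 1)).mean(),
        })
    return df.groupby("skin_tone").apply(rates)

df = pd.DataFrame({
    "skin_tone": ["I-II"] * 4 + ["V-VI"] * 4,
    "label":     [1, 0, 1, 0, 1, 0, 1, 0],
    "pred":      [1, 0, 1, 0, 0, 1, 0, 0],  # worse on the "V-VI" group
})
print(audit_by_subgroup(df))
```

Reporting false positive and false negative rates separately for each subgroup, rather than a single overall accuracy figure, is what makes this sort of disparity visible in the first place.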

This prompts the questions: How can we mitigate biases in AI-based health care? And how can we ensure that AI improves health care, rather than amplifying existing health disparities?

That’s what we asked respondents in our In Focus Series on Direct-to-Consumer Health Apps. Read their answers below, and check out their responses to the other questions in the series.

Building Trust Through Transparency? FDA Regulation of AI/ML-Based Software

By Jenna Becker

To generate trust in artificial intelligence and machine learning (AI/ML)-based software used in health care, the U.S. Food and Drug Administration (FDA) intends to regulate this technology with an eye toward user transparency. 

But will transparency in health care AI actually build trust among users? Or will algorithm explanations go ignored? I argue that individual algorithm explanations will likely do little to build trust among health care AI users.
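To make the object of that argument concrete: for a simple linear risk model, an “individual algorithm explanation” often amounts to a per-feature breakdown of a single prediction, along the lines of the sketch below. The model, feature names, and data are invented for illustration (real, FDA-regulated AI/ML software is far more complex), but the printout is the kind of artifact a transparency requirement might put in front of a clinician.

```python
# Hypothetical "individual algorithm explanation": per-feature log-odds
# contributions for one prediction from a toy logistic regression. The
# features and data are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "systolic_bp", "bmi", "smoker"]
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 4))  # toy, pre-standardized inputs
y = (X @ np.array([0.8, 1.2, 0.5, 1.5]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

patient = X[0]
contributions = model.coef_[0] * patient  # each feature's pull on the log-odds
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>12}: {c:+.2f}")
print(f"Predicted risk: {model.predict_proba(patient.reshape(1, -1))[0, 1]:.0%}")
```

Whether a busy clinician would pause over such a breakdown, rather than simply accept or reject the model’s bottom-line risk score, is precisely the trust question at issue.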

Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to utilize the rivers of data we generate to better diagnose and treat medical conditions, especially mental health conditions such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once digital phenotyping’s greatest strength and a significant challenge.

For example, the average smartphone user spends 2.25 hours a day using the 60-90 apps installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When dealing with this quantity of data from each patient or research subject, how can care teams ensure that they do not miss important predictors of health?
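One common first step is to collapse the raw event stream into a small set of clinically interpretable daily features before any modeling happens. The sketch below is a minimal, assumed example; the schema, app categories, and the late-night-use feature (a crude proxy for sleep disruption, relevant to conditions like bipolar disorder) are invented rather than drawn from any real digital phenotyping platform.

```python
# Minimal sketch: reducing raw app-usage events to daily summary features.
# The column names and feature choices are illustrative, not from a real system.
import pandas as pd

events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-05-01 08:05", "2024-05-01 13:40", "2024-05-01 23:55",
        "2024-05-02 02:10", "2024-05-02 09:30",
    ]),
    "app_category": ["social", "messaging", "social", "social", "health"],
    "duration_min": [12.0, 5.5, 44.0, 38.0, 3.0],
})

events["date"] = events["timestamp"].dt.date
# Usage between midnight and 5 a.m., a rough proxy for disrupted sleep.
late = events["timestamp"].dt.hour < 5
events["late_night_min"] = events["duration_min"].where(late, 0.0)

daily = events.groupby("date").agg(
    total_screen_min=("duration_min", "sum"),
    sessions=("duration_min", "size"),
    late_night_min=("late_night_min", "sum"),
)
print(daily)
```

Summaries like these shrink each patient-day to a handful of numbers a care team can actually review, one pragmatic answer to the volume problem, though at the cost of discarding whatever signal the raw stream contained.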
