
AI, Copyright, and Open Science: Health Implications of the New York Times/OpenAI Lawsuit

By Adithi Iyer

The legal world is atwitter with the developing artificial intelligence (“AI”) copyright cage match between The New York Times and OpenAI. The Times filed its complaint in Manhattan Federal District Court on December 27, accusing OpenAI of unlawfully using its (copyrighted and paywalled) articles to train ChatGPT. OpenAI, in turn, published a sharply worded response on January 8, claiming that its incorporation of the material for training purposes squarely constitutes fair use. This follows ongoing suits by authors against OpenAI on similar grounds, but the titanic scale of the Times-OpenAI dispute and its application of these issues to media in federal litigation make it one to watch. While much of the buzz around the case has centered on its intellectual property and First Amendment implications, it may also carry consequences for the health and biotech industries. Here’s a rundown of the major legal questions at play and the health-related stakes of a future decision.

Read More


What’s on the Horizon for Health and Biotech with the AI Executive Order

By Adithi Iyer

Last month, President Biden signed an Executive Order mobilizing an all-hands-on-deck approach to the cross-sector regulation of artificial intelligence (AI). One such sector (mentioned, by my count, 33 times) is health care. This is perhaps unsurprising: the health sector touches almost every other aspect of American life, and it continues to intersect heavily with technological developments. AI is particularly paradigm-shifting here, as the technology dramatically expands existing capabilities in analytics, diagnostics, and treatment development. This Executive Order is, therefore, as important a development for health care practitioners and researchers as it is for legal experts. Here are some intriguing takeaways:

Read More


FDA Solicits Feedback on the Use of AI and Machine Learning in Drug Development

By Matthew Chun

The U.S. Food and Drug Administration (FDA), in fulfilling its task of ensuring that drugs are safe and effective, has recently turned its attention to the growing use of artificial intelligence (AI) and machine learning (ML) in drug development. On May 10, FDA published a discussion paper on this topic and requested feedback “to enhance mutual learning and to establish a dialogue with FDA stakeholders” and to “help inform the regulatory landscape in this area.” In this blog post, I will summarize the main themes of the discussion paper, highlighting areas where FDA seems particularly concerned, and detailing how interested parties can engage with the agency on these issues.

Read More


A Closer Look at FDA’s Newly Released AI/ML Action Plan

By Vrushab Gowda

The U.S. Food and Drug Administration (FDA or “the Agency”) recently issued its long-awaited AI/ML (Artificial Intelligence/Machine Learning) Action Plan.

Announced in the closing days of Stephen Hahn’s term as Commissioner, it takes steps toward establishing a dedicated regulatory strategy for AI products intended to function as software as a medical device (SaMD), as distinct from those embedded within physical hardware. The FDA has already approved a number of such products for clinical use; however, AI algorithms’ self-learning capabilities expose the limitations of traditional regulatory pathways.
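
Why does self-learning strain those pathways? A conventional premarket submission describes an algorithm whose behavior is fixed at the time of review, whereas a continuously learning algorithm keeps changing after it is deployed. The Python sketch below is not drawn from the Action Plan; the data, model, and update schedule are all invented for illustration. It simply contrasts a "locked" model with an "adaptive" one that keeps updating on hypothetical post-market data, so that the model in clinical use gradually diverges from the one that was reviewed.

```python
# Hypothetical illustration: a "locked" model frozen at clearance versus an
# "adaptive" model that continues learning from post-market data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_premarket = rng.normal(size=(500, 10))            # toy data available at review time
y_premarket = (X_premarket[:, 0] > 0).astype(int)   # invented label rule

# "Locked" device: trained once, then never updated.
locked = SGDClassifier(random_state=0).fit(X_premarket, y_premarket)

# "Adaptive" device: same starting point, but it keeps updating as new
# (and gradually shifting) real-world data arrives after deployment.
adaptive = SGDClassifier(random_state=0).fit(X_premarket, y_premarket)
for month in range(12):
    X_new = rng.normal(loc=0.1 * month, size=(100, 10))  # drifting post-market data
    y_new = (X_new[:, 0] > 0).astype(int)
    adaptive.partial_fit(X_new, y_new)

# On today's patients, the two models no longer behave identically,
# even though only the locked version matches what was reviewed.
X_today = rng.normal(loc=1.0, size=(5, 10))
print("locked predictions:  ", locked.predict(X_today))
print("adaptive predictions:", adaptive.predict(X_today))
```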

The Action Plan further outlines the first major objectives of the Digital Health Center of Excellence (DHCoE), which was established to much fanfare but whose early moves have remained somewhat unclear. This document presents a policy roadmap for its years ahead.

Read More


Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to utilize the rivers of data we generate to better diagnose and treat medical conditions, especially mental health ones, such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once both digital phenotyping’s greatest strength and a significant challenge.

For example, the average smartphone user spends 2.25 hours a day using the 60-90 apps installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When dealing with this quantity of data from each patient or research subject, how does the care team ensure that it does not miss important predictors of health?
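
To make the scale problem concrete: a common first step in digital phenotyping pipelines is to collapse raw event streams into a few summary features per day before a clinician or model ever looks at them. The Python sketch below is a minimal, entirely hypothetical illustration of that kind of aggregation (the event log, column names, and features are invented), not a description of any particular system.

```python
# Hypothetical app-usage log reduced to daily summary features.
import pandas as pd

events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-03-01 08:05", "2024-03-01 23:40",
        "2024-03-02 02:15", "2024-03-02 13:00",
    ]),
    "app": ["messaging", "social", "social", "maps"],
    "minutes_used": [12, 45, 30, 8],
})

# Derive the grouping key and a simple behavioral marker (late-night use).
events["date"] = events["timestamp"].dt.date
events["night_use"] = events["timestamp"].dt.hour < 6

# Collapse many raw events into one reviewable row per day.
daily = events.groupby("date").agg(
    total_minutes=("minutes_used", "sum"),
    distinct_apps=("app", "nunique"),
    night_sessions=("night_use", "sum"),
)
print(daily)
```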

Read More


Insufficient Protections for Health Data Privacy: Lessons from Dinerstein v. Google

By Jenna Becker

A data privacy lawsuit against the University of Chicago Medical Center and Google was recently dismissed, demonstrating the difficulty of pursuing claims against hospitals that share patient data with tech companies.

Patient data sharing between health systems and large software companies is becoming increasingly common as these organizations chase the potential of artificial intelligence and machine learning in healthcare. However, many tech firms also own troves of consumer data, and these companies may be able to match up “de-identified” patient records with a patient’s identity.
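
The underlying worry is a record-linkage problem: if a “de-identified” dataset retains quasi-identifiers such as demographics and dates of service, joining it against a named consumer dataset can be enough to re-attach identities. The Python sketch below is a deliberately simplified, entirely hypothetical illustration of that kind of linkage; it is not based on the Dinerstein record or on any real dataset.

```python
# Hypothetical re-identification via quasi-identifiers left in "de-identified" data.
import pandas as pd

deidentified_visits = pd.DataFrame({
    "zip3": ["606", "773"],
    "birth_year": [1984, 1992],
    "sex": ["F", "M"],
    "visit_date": ["2019-05-02", "2019-06-11"],
    "diagnosis_code": ["E11.9", "F31.9"],   # the sensitive attribute
})

consumer_profiles = pd.DataFrame({
    "name": ["A. Rivera", "B. Okafor"],
    "zip3": ["606", "773"],
    "birth_year": [1984, 1992],
    "sex": ["F", "M"],
    "location_ping_date": ["2019-05-02", "2019-06-11"],  # e.g., phone seen near the clinic
})

# Join on demographics, then keep rows where the consumer's location ping
# matches the visit date -- if the combination is unique, identity follows.
candidates = deidentified_visits.merge(consumer_profiles, on=["zip3", "birth_year", "sex"])
reidentified = candidates[candidates["visit_date"] == candidates["location_ping_date"]]
print(reidentified[["name", "diagnosis_code"]])
```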

Scholars, privacy advocates, and lawmakers have argued that HIPAA is inadequate in the current landscape. Dinerstein v. Google is a clear reminder that both HIPAA and contract law are insufficient for handling these types of privacy violations. Patients are left seemingly defenseless against their most personal information being shared without their meaningful consent.

Read More


Understanding Racial Bias in Medical AI Training Data

By Adriana Krasniansky

Interest in artificial intelligence (AI) for health care has grown at an astounding pace: the global AI health care market is expected to reach $17.8 billion by 2025, and AI-powered systems are being designed to support medical activities ranging from patient diagnosis and triage to drug pricing.

Yet, as researchers across technology and medical fields agree, “AI systems are only as good as the data we put into them.” When AI systems are trained on patient datasets that are incomplete, or that under- or misrepresent certain populations, they stand to develop discriminatory biases in their outcomes. In this article, we present three examples that demonstrate the potential for racial bias in medical AI arising from training data.
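
As a stylized illustration of that point, the Python sketch below trains one model on synthetic data in which a single group supplies 95 percent of the training examples; evaluated on balanced test sets, the model does well for the majority group and little better than chance for the minority group. Every dataset, label rule, and group definition here is invented for illustration.

```python
# Synthetic demonstration of how under-representation in training data can
# produce disparate performance across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def group_a(n):
    # In this toy world, group A's outcome is driven by feature 0.
    X = rng.normal(size=(n, 5))
    return X, (X[:, 0] > 0).astype(int)

def group_b(n):
    # Group B's outcome is driven by a different feature entirely.
    X = rng.normal(size=(n, 5))
    return X, (X[:, 1] > 0).astype(int)

# Training set: 95% group A, 5% group B.
Xa, ya = group_a(1900)
Xb, yb = group_b(100)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced evaluation: accurate for the majority group, near chance for the minority.
Xa_test, ya_test = group_a(1000)
Xb_test, yb_test = group_b(1000)
print("Group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
print("Group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))
```

Read More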


Is Data Sharing Caring Enough About Patient Privacy? Part II: Potential Impact on US Data Sharing Regulations

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

Earlier, we discussed the new suit filed against Google, the University of Chicago (UC), and UChicago Medicine, focusing on the disclosure of patient data from UC to Google. This piece goes beyond that background to consider the lawsuit’s potential impact in the U.S. and to place it in the context of other trends in data privacy and security.

Read More


Is Data Sharing Caring Enough About Patient Privacy? Part I: The Background

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

The huge prospects of artificial intelligence and machine learning (ML), as well as the increasing trend toward public-private partnerships in biomedical innovation, underscore the importance of effective governance and regulation of data sharing in the health and life sciences. Cutting-edge biomedical research depends on high-quality data to ensure safe and effective health products. It is often argued that greater access to individual patient data collections stored in hospitals’ medical records systems may considerably advance medical science and improve patient care. However, as public and private actors attempt to gain access to such high-quality data to train their advanced algorithms, a number of sensitive ethical and legal aspects also need to be carefully considered. Besides giving rise to safety, antitrust, trade secrets, and intellectual property issues, such practices have resulted in serious concerns with regard to patient privacy, confidentiality, and the commitments made to patients via appropriate informed consent processes.

Read More


Artificial Intelligence for Suicide Prediction

Suicide is a global problem, causing 800,000 deaths per year worldwide. In the United States, suicide rates have risen by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than are killed in auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Though essentially unregulated, social suicide prediction uses behavioral data mined from consumers’ digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.

Read More