
AI, Copyright, and Open Science: Health Implications of the New York Times/OpenAI Lawsuit

By Adithi Iyer

The legal world is atwitter with the developing artificial intelligence (“AI”) copyright cage match between The New York Times and OpenAI. The Times filed its complaint in Manhattan Federal District Court on December 27, accusing OpenAI of unlawfully using its (copyrighted and paywalled) articles to train ChatGPT. OpenAI, in turn, published a sharply worded response on January 8, claiming that its incorporation of the material for training purposes squarely constitutes fair use. This follows ongoing suits by authors against OpenAI on similar grounds, but the titanic scale of the Times-OpenAI dispute, and its application of these issues to media in federal litigation, makes it one to watch. While much of the buzz around the case has centered on its intellectual property and First Amendment dimensions, there may also be consequences for the health and biotech industries. Here’s a rundown of the major legal questions at play and the health-related stakes of a future decision.

Read More


President Joe Biden, the AI Wizard

By Bobby Stroup

Artificial Intelligence (AI) isn’t magic, but there is value in telling a magical story to non-technical stakeholders to describe how we’ll govern this transformative technology. In fact, President Joe Biden himself could benefit by borrowing from an existing legend.

Let’s pick a story that’s already popular and one where heroes successfully overcame a dangerous technology. That technology should be simple, but also one that embodies the idea of harms caused by design. Focusing on a simple device avoids the distraction of technological details, allowing us to more easily ponder the bigger picture.

With the above parameters in mind, I suggest we discuss the technology of an “evil ring.” No, I don’t mean that Ring. I’m saying we should analogize health care AI to Sauron’s One Ring from J.R.R. Tolkien’s The Lord of the Rings.

Read More


What’s on the Horizon for Health and Biotech with the AI Executive Order

By Adithi Iyer

Last month, President Biden signed an Executive Order mobilizing an all-hands-on-deck approach to the cross-sector regulation of artificial intelligence (AI). One such sector (mentioned, by my count, 33 times) is health care. This is perhaps unsurprising: the health sector touches almost every other aspect of American life, and it continues to intersect heavily with technological developments. AI is particularly paradigm-shifting here: the technology already advances existing capabilities in analytics, diagnostics, and treatment development exponentially. This Executive Order is, therefore, as important a development for health care practitioners and researchers as it is for legal experts. Here are some intriguing takeaways:

Read More


Emoji Laws in Health Care ❤️⚖️

By Bobby Stroup

2023 is the year of the emoji lawsuit. This year, a “thumbs-up” 👍 emoji was found to be part of a legally binding contract. In another case, a “moon” 🌝 emoji was found to be possible evidence of securities fraud. This legal evolution may seem a bit strange. We are more accustomed to emojis being the characters in a “cinematic masterpiece” or the subject of corporate public relations. However, times are changing, and these symbols now carry financial and legal implications across a wide variety of industries. Health care is no exception.

Read More


AI in Digital Health: Autonomy, Governance, and Privacy

The following post is adapted from the edited volume AI in eHealth: Human Autonomy, Data Governance and Privacy in Healthcare.

By Marcelo Corrales Compagnucci and Mark Fenwick

The emergence of digital platforms and related technologies is transforming healthcare, creating new opportunities and challenges for all stakeholders in the medical space. Many of these developments are predicated on data and AI algorithms to prevent, diagnose, treat, and monitor sources of epidemic diseases, such as the ongoing pandemic and other pathogenic outbreaks. However, these opportunities and challenges often have a complex, multi-dimensional character, and any mapping of this emerging ecosystem requires a greater degree of interdisciplinary dialogue and a more nuanced appreciation of the normative and cognitive complexity of these issues.

Read More


Mitigating Bias in Direct-to-Consumer Health Apps

By Sara Gerke and Chloe Reichel

Recently, Google announced a new direct-to-consumer (DTC) health app powered by artificial intelligence (AI) to diagnose skin conditions.

The company met criticism for the app because the AI was primarily trained on images from people with darker white skin, light brown skin, and fair skin. As a result, the app may end up over- or under-diagnosing conditions for people with darker skin tones.

This prompts the questions: How can we mitigate biases in AI-based health care? And how can we ensure that AI improves health care, rather than augmenting existing health disparities?

That’s what we asked the respondents to our In Focus Series on Direct-to-Consumer Health Apps. Read their answers below, and check out their responses to the other questions in the series.

Read More


We Need to Do More with Hospitals’ Data, But There Are Better Ways

By Wendy Netter Epstein and Charlotte Tschider

This May, Google announced a new partnership with national hospital chain HCA Healthcare to consolidate HCA’s digital health data from electronic medical records and medical devices and store it in Google Cloud.

This move is just the latest in a growing trend: in the first half of this year alone, at least 38 partnerships between providers and big tech have been announced. Health systems are hoping to leverage the know-how of tech titans to unlock the potential of their treasure troves of data.

Health systems have faltered in achieving this on their own, facing, on the one hand, technical and practical challenges, and, on the other, political and ethical concerns.

Read More


Top Health Considerations in the European Commission’s ‘Harmonised Rules on Artificial Intelligence’

By Rachele Hendricks-Sturrup

On April 21, 2021, the European Commission released a “first-ever” legal framework on artificial intelligence (AI) in an attempt to address societal risks associated with AI implementation.

The EU has now effectively set the global stage for AI regulation as the first union of member states to create a legal framework with the specific intent to address or mitigate the potentially harmful effects of broad AI implementation.

Within the proposed framework, the Commission touched on a variety of considerations and “high-risk” AI system scenarios. The Commission defined high-risk AI systems as those that pose significant (material or immaterial) risks to the health and safety or the fundamental rights of persons.

This post outlines four key considerations in the proposal with regard to health: 1) prioritizing emergency health care; 2) law enforcement profiling as a social determinant of health; 3) immigrant health risk screening; and 4) AI regulatory sandboxes and a health data space to support AI product commercialization and public health innovation.

Read More