
Artificial Intelligence and Health Law: Updates from England

By John Tingle

Artificial intelligence (AI) is making an impact on health law in England.

The growing presence of AI in law has been chronicled by organizations such as the Law Society, which published a forward-thinking, horizon-scanning paper on artificial intelligence and the legal profession back in 2018.

The report identifies several key emerging strands of AI development and use, including Q&A chatbots, document analysis, document delivery, legal adviser support, case outcome prediction, and clinical negligence analysis. These applications of AI already show promise: one algorithm developed by researchers at University College London, the University of Sheffield, and the University of Pennsylvania was able to predict case outcomes with 79% accuracy.

Read More


We Need to Do More with Hospitals’ Data, But There Are Better Ways

By Wendy Netter Epstein and Charlotte Tschider

This May, Google announced a new partnership with national hospital chain HCA Healthcare to consolidate HCA’s digital health data from electronic medical records and medical devices and store it in Google Cloud.

This move is just the latest in a growing trend — in the first half of this year alone, at least 38 partnerships between providers and big tech companies have been announced. Health systems are hoping to leverage the know-how of tech titans to unlock the potential of their treasure troves of data.

Health systems have faltered in achieving this on their own, facing, on the one hand, technical and practical challenges, and, on the other, political and ethical concerns.

Read More


Top Health Considerations in the European Commission’s ‘Harmonised Rules on Artificial Intelligence’

By Rachele Hendricks-Sturrup

On April 21, 2021, the European Commission released a “first-ever” legal framework on artificial intelligence (AI) in an attempt to address societal risks associated with AI implementation.

The EU has now effectively set the global stage for AI regulation, as the first union of member states to create a legal framework with the specific intent to address or mitigate the potentially harmful effects of broad AI implementation.

Within the proposed framework, the Commission touched on a variety of considerations and “high-risk” AI system scenarios, defining high-risk AI systems as those that pose significant (material or immaterial) risks to the health and safety or fundamental rights of persons.

This post outlines four key considerations in the proposal with regard to health: 1) prioritizing emergency health care; 2) law enforcement profiling as a social determinant of health; 3) immigrant health risk screening; and 4) AI regulatory sandboxes and a health data space to support AI product commercialization and public health innovation.

Read More


Regulatory Gap in Health Tech: Resource Allocation Algorithms

By Jenna Becker

Hospitals use artificial intelligence and machine learning (AI/ML) not only in clinical decision-making, but also to allocate scarce resources.

These resource allocation algorithms have received less regulatory attention than clinical decision-making algorithms, but nevertheless pose similar concerns, particularly with respect to their potential for bias.

Without regulatory oversight, the risks associated with resource allocation algorithms are significant. Health systems must take particular care when implementing these solutions.

Read More


The Future of Race-Based Clinical Algorithms

By Jenna Becker

Race-based clinical algorithms are widely used. Yet many race-based adjustments lack evidence and worsen racism in health care. 

Prominent politicians have called for research into the use of race-based algorithms in clinical care as part of a larger effort to understand the public health impacts of structural racism. Physicians and researchers have called for an urgent reconsideration of the use of race in these algorithms. 

Efforts to remove race-based algorithms from practice have thus far been piecemeal. Medical associations, health systems, and policymakers must work in tandem to rapidly identify and remove racist algorithms from clinical practice.

Read More


Building Trust Through Transparency? FDA Regulation of AI/ML-Based Software

By Jenna Becker

To generate trust in artificial intelligence and machine learning (AI/ML)-based software used in health care, the U.S. Food and Drug Administration (FDA) intends to regulate this technology with an eye toward user transparency. 

But will transparency in health care AI actually build trust among users? Or will algorithm explanations go ignored? I argue that individual algorithm explanations will likely do little to build trust among health care AI users.

Read More


A Closer Look at FDA’s Newly Released AI/ML Action Plan

By Vrushab Gowda

The U.S. Food and Drug Administration (FDA or “the Agency”) recently issued its long-awaited AI/ML (Artificial Intelligence/Machine Learning) Action Plan.

Announced in the closing days of Stephen Hahn’s term as Commissioner, it takes steps toward establishing a dedicated regulatory strategy for AI products intended as software as a medical device (SaMD), as distinct from those embedded within physical hardware. The FDA has already approved a number of such products for clinical use; however, AI algorithms’ self-learning capabilities expose the limitations of traditional regulatory pathways.

The Action Plan further outlines the first major objectives of the Digital Health Center of Excellence (DHCoE), which was established to much fanfare but whose early moves have remained somewhat unclear. This document presents a policy roadmap for its years ahead.

Read More


Computational Psychiatry for Precision Sentencing in Criminal Law

By Francis X. Shen

A core failing of the criminal justice system is its inability to individualize criminal sentences and tailor probation and parole to meet the unique profile of each offender.

As legal scholar, and now federal judge, Stephanos Bibas has observed, “All too often … sentencing guidelines and statutes act as sledgehammers rather than scalpels.”

As a result, dangerous offenders may be released, while offenders who pose little risk to society are left behind bars. And recidivism is common — the U.S. has an astounding recidivism rate of 80% — in part because the current criminal justice system largely fails to address mental health challenges, which are heavily over-represented in the justice system.

Advances in computational psychiatry, such as the deep phenotyping methods explored in this symposium, offer clinicians newfound abilities to practice precision psychiatry. The idea behind precision psychiatry is both simple and elusive: treat individuals as individuals. Yet advancing such a program in practice is “very ambitious” because no two individual brains — and the experiences those brains have had over a lifetime — are the same.

Deep phenotyping offers the criminal justice system the tools to improve public safety, identify low-risk offenders, and modify decision-making to reduce recidivism. Computational psychiatry can lead to what can be described as precision sentencing.

Read More


Deep Phenotyping Could Help Solve the Mental Health Care Crisis

By Justin T. Baker

The United States faces a growing mental health crisis and offers insufficient means for individuals to access care.

Digital technologies — the phone in your pocket, the camera-enabled display on your desk, the “smart” watch on your wrist, and the smart speakers in your home — might offer a path forward.

Deploying these technologies ethically, while understanding the risks of moving too fast (or too slow), could radically extend our limited toolkit for providing access to high-quality care to the many individuals affected by mental health issues for whom the current system is out of reach or otherwise failing to meet their needs.

Read More


Incidental Findings in Deep Phenotyping Research: Legal and Ethical Considerations

By Amanda Kim, M.D., J.D., Michael Hsu, M.D., Amanda Koire, M.D., Ph.D., Matthew L. Baum, M.D., Ph.D., D.Phil.

What obligations do researchers have to disclose potentially life-altering incidental findings (IFs) as they happen in real time?

Deep phenotyping research in psychiatry integrates an individual’s real-time digital footprint (e.g., texts, GPS, wearable data) with their biomedical data (e.g., genetic, imaging, other biomarkers) to discover clinically relevant patterns, usually with the aid of machine learning. Findings that are incidental to the study’s objectives, but that may be of great importance to participants, will inevitably arise in deep phenotyping research.
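To make the data-integration step concrete, here is a minimal, purely illustrative sketch of how a passive sensing stream might be joined with a participant’s baseline record and screened for a pattern of concern (of the kind raised in the third hypothetical case below). The field names, thresholds, and rule are hypothetical and stand in for what would, in practice, be a far richer machine learning pipeline; this is not any study’s actual method.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WearableSample:
    """One timestamped reading from a participant's passive sensing stream (hypothetical schema)."""
    timestamp: datetime
    respiratory_rate: float            # breaths per minute
    latitude: float
    longitude: float

@dataclass
class Participant:
    """Baseline study record for one participant (hypothetical schema)."""
    participant_id: str
    known_trigger_sites: list[tuple[float, float]]  # e.g., location of a prior overdose

def near(a: tuple[float, float], b: tuple[float, float], tol: float = 0.001) -> bool:
    """Crude coordinate-proximity check, for illustration only."""
    return abs(a[0] - b[0]) < tol and abs(a[1] - b[1]) < tol

def flag_concerning_pattern(p: Participant, s: WearableSample) -> bool:
    """Flag a sample that combines a very low respiratory rate with presence at a known trigger site."""
    at_trigger_site = any(near((s.latitude, s.longitude), site) for site in p.known_trigger_sites)
    return at_trigger_site and s.respiratory_rate < 8.0  # hypothetical threshold

if __name__ == "__main__":
    participant = Participant("P-001", known_trigger_sites=[(41.8781, -87.6298)])
    sample = WearableSample(datetime.now(), respiratory_rate=6.5,
                            latitude=41.8783, longitude=-87.6299)
    # True here would be exactly the kind of incidental finding the post is concerned with.
    print(flag_concerning_pattern(participant, sample))
```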

The legal and ethical questions these IFs introduce are fraught. Consider the three hypothetical cases below, each involving an individual enrolled in a deep phenotyping research study designed to identify factors affecting the risk of substance use relapse or overdose:

A 51-year-old woman with alcohol use disorder (AUD) is six months into sobriety. She is intrigued to learn that the study algorithm will track her proximity to some of her known triggers for alcohol relapse (e.g., bars, liquor stores), and asks to be warned with a text message when nearby so she can take an alternative route. Should the researchers share that data?

A 26-year-old man with AUD is two years into sobriety. Three weeks into the study, he relapses. He begins arriving at work inebriated and loses his job. After the study is over, he realizes the researchers may have been able to see, from his alcohol use surveys, disorganized text messages, GPS tracking, and sensor data, that he may have been inebriated at work, and wishes someone had reached out to him before he lost his job. Should they have?

A 35-year-old man with severe opioid use disorder experiences a near-fatal overdose and is discharged from the hospital. Two weeks later, his smartphone GPS is in the same location as his last overdose, and his wearable detects that his respiratory rate has plummeted. Should researchers call EMS?

Read More