Building Trust Through Transparency? FDA Regulation of AI/ML-Based Software

By Jenna Becker

To generate trust in artificial intelligence and machine learning (AI/ML)-based software used in health care, the U.S. Food and Drug Administration (FDA) intends to regulate this technology with an eye toward user transparency. 

But will transparency in health care AI actually build trust among users? Or will algorithm explanations go ignored? I argue that individual algorithm explanations will likely do little to build trust among health care AI users.

Read More

Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to utilize the rivers of data we generate to better diagnose and treat medical conditions, especially mental health ones, such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once both digital phenotyping’s greatest strength and a significant challenge.

For example, the average smartphone user spends 2.25 hours a day using the 60-90 apps installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When dealing with this quantity of data generated by each patient or research subject, how does the care team ensure that they do not miss important predictors of health?
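
To make the scale of the problem concrete, here is a minimal sketch in Python (using pandas, with entirely synthetic data and hypothetical column names) of how raw per-app usage events might be collapsed into a handful of daily summary features for a digital phenotyping pipeline:

```python
import pandas as pd

# Hypothetical raw event log for a single patient: one row per app session.
# Column names and values are invented for illustration, not drawn from any
# real digital phenotyping system.
events = pd.DataFrame({
    "date": ["2020-11-01"] * 3 + ["2020-11-02"] * 2,
    "app": ["messaging", "maps", "social", "messaging", "social"],
    "minutes_used": [34.0, 12.5, 80.0, 5.0, 140.0],
    "launches": [12, 3, 25, 2, 40],
})

# Collapse the raw events into a few per-day summaries (total screen time,
# number of distinct apps, launch count) that a clinician or model could
# plausibly review.
daily = (
    events.groupby("date")
    .agg(
        total_minutes=("minutes_used", "sum"),
        distinct_apps=("app", "nunique"),
        total_launches=("launches", "sum"),
    )
    .reset_index()
)

print(daily)
```

Even this toy aggregation discards most of the raw signal; deciding which summaries are clinically meaningful, and which discarded details might have been important predictors of health, is precisely the challenge described above.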

Read More

Detecting Dementia

Cross-posted, with slight modification, from Harvard Law Today, where it originally appeared on November 21, 2020. 

By Chloe Reichel

Experts gathered last month to discuss the ethical, social, and legal implications of technological advancements that facilitate the early detection of dementia.

“Detecting Dementia: Technology, Access, and the Law” was hosted on Nov. 16 as part of the Project on Law and Applied Neuroscience, a collaboration between the Center for Law, Brain and Behavior at Massachusetts General Hospital and the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

The event, organized by Francis X. Shen ’06 Ph.D. ’08, the Petrie-Flom Center’s senior fellow in Law and Applied Neuroscience and executive director of the Center for Law, Brain and Behavior at Massachusetts General Hospital, was one of a series hosted by the Project on Law and Applied Neuroscience on aging brains.

Early detection of dementia is a hopeful prospect for the treatment of patients, both because it may facilitate early medical intervention and because it may enable more robust advance care planning.

Read More

Health Care AI in Pandemic Times

By Jenna Becker

The early days of the COVID-19 pandemic were met by the rapid rollout of artificial intelligence tools to diagnose the disease and identify patients at risk of worsening illness in health care settings.

Understandably, these tools generally were released without regulatory oversight, and some models were deployed prior to peer review. However, even after several months of ongoing use, several AI developers still have not shared their testing results for external review. 

This precedent set by the pandemic may have a lasting — and potentially harmful — impact on the oversight of health care AI.

Read More

AI’s Legitimate Interest: Video Preview with Charlotte Tschider

The Health Law Policy, Bioethics, and Biotechnology Workshop provides a forum for discussion of new scholarship in these fields from the world’s leading experts.

The workshop is led by Professor I. Glenn Cohen, and presenters come from a wide range of disciplines and departments.

In this video, Charlotte Tschider gives a preview of her paper, “AI’s Legitimate Interest: Towards a Public Benefit Privacy Model,” which she will present at the Health Law Policy workshop on November 9, 2020. Watch the full video below:

Insufficient Protections for Health Data Privacy: Lessons from Dinerstein v. Google

By Jenna Becker

A data privacy lawsuit against the University of Chicago Medical Center and Google was recently dismissed, demonstrating the difficulty of pursuing claims against hospitals that share patient data with tech companies.

Patient data sharing between health systems and large software companies is becoming increasingly common as these organizations chase the potential of artificial intelligence and machine learning in healthcare. However, many tech firms also own troves of consumer data, and these companies may be able to match up “de-identified” patient records with a patient’s identity.
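
To illustrate the linkage risk described above, here is a minimal, entirely hypothetical sketch in Python (pandas): if a "de-identified" clinical extract retains quasi-identifiers such as service dates and ZIP codes, a holder of identified consumer data could in principle attach names to records with an ordinary join. All column names and values below are invented for illustration.

```python
import pandas as pd

# Hypothetical "de-identified" clinical extract: names removed, but service
# dates and ZIP codes (quasi-identifiers) retained. All values are invented.
clinical = pd.DataFrame({
    "visit_date": ["2019-03-02", "2019-03-02", "2019-04-17"],
    "zip_code": ["60637", "60615", "60637"],
    "diagnosis_code": ["F31.9", "I10", "E11.9"],
})

# Hypothetical identified consumer data held by a tech company, e.g.,
# location history that places a named user near a clinic on a given day.
consumer = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "visit_date": ["2019-03-02", "2019-04-17"],
    "zip_code": ["60615", "60637"],
})

# An ordinary join on the shared quasi-identifiers is enough to attach a
# name to a supposedly anonymous diagnosis.
linked = clinical.merge(consumer, on=["visit_date", "zip_code"])
print(linked[["name", "visit_date", "zip_code", "diagnosis_code"]])
```

Real-world linkage is noisier and more probabilistic than this exact-match join, but the underlying mechanism, overlapping quasi-identifiers across datasets, is the same.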

Scholars, privacy advocates, and lawmakers have argued that HIPAA is inadequate in the current landscape. Dinerstein v. Google is a clear reminder that both HIPAA and contract law are insufficient for handling these types of privacy violations. Patients are left seemingly defenseless against their most personal information being shared without their meaningful consent.

Read More

Is Data Sharing Caring Enough About Patient Privacy? Part II: Potential Impact on US Data Sharing Regulations

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

Earlier, we discussed the new suit filed against Google, the University of Chicago (UC), and UChicago Medicine, focusing on the disclosure of patient data from UC to Google. This piece goes beyond that background to consider the potential impact of the lawsuit in the U.S. and to place it in the context of other trends in data privacy and security.

Read More

Is Data Sharing Caring Enough About Patient Privacy? Part I: The Background

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

The enormous promise of artificial intelligence and machine learning (ML), as well as the increasing trend toward public-private partnerships in biomedical innovation, underscores the importance of effective governance and regulation of data sharing in the health and life sciences. Cutting-edge biomedical research demands high-quality data to ensure safe and effective health products. It is often argued that greater access to individual patient data collections stored in hospitals’ medical records systems may considerably advance medical science and improve patient care. However, as public and private actors attempt to gain access to such high-quality data to train their advanced algorithms, a number of sensitive ethical and legal aspects also need to be carefully considered. Besides giving rise to safety, antitrust, trade secrets, and intellectual property issues, such practices have resulted in serious concerns with regard to patient privacy, confidentiality, and the commitments made to patients via appropriate informed consent processes.

Read More

Artificial Intelligence for Suicide Prediction

Suicide is a global problem that causes 800,000 deaths per year worldwide. In the United States, suicide rates rose by 25 percent in the past two decades, and suicide now kills 45,000 Americans each year, which is more than auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Social suicide prediction uses behavioral data mined from consumers’ digital interactions, and it is essentially unregulated: the companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.

Read More

Machine Learning as the Enemy of Science? Not Really.

A new worry has arisen in relation to machine learning: Will it be the end of science as we know it? The quick answer is, no, it will not. And here is why.

Let’s start by recapping what the problem seems to be. Using machine learning, we are increasingly able to make better predictions than we can with the tools of the traditional scientific method, so to speak. However, these predictions do not come with causal explanations. In fact, the more complex the algorithms become, as we move deeper into deep neural networks, the better the predictions get and the worse the explicability becomes. And thus, “if prediction is […] the primary goal of science,” as some argue, then the pillar of the scientific method, understanding of phenomena, becomes superfluous, and machine learning seems to be a better tool for science than the scientific method.

But is this really the case? This argument makes two assumptions: (1) the primary goal of science is prediction, and once a system is able to make accurate predictions, the goal of science is achieved; and (2) machine learning conflicts with and replaces the scientific method. I argue that neither of these assumptions holds. The primary goal of science is more than just prediction; it certainly includes explanation of how things work. Moreover, machine learning in a way makes use of and complements the scientific method rather than conflicting with it.

Here is an example to explain what I mean. Prediction through machine learning is used extensively in healthcare. Algorithms are developed to predict hospital readmissions at the time of discharge or to predict when a patient’s condition will take a turn for the worse. This is fantastic, because these are certainly valuable pieces of information, and it has been immensely difficult to make accurate predictions in these areas. In that sense, machine learning methodology indeed surpasses the traditional scientific method in predicting these outcomes. However, this is neither the whole story nor the end of the story.

Read More
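
To make the prediction-without-explanation point concrete, here is a minimal sketch (Python, scikit-learn, entirely synthetic data, hypothetical feature names) of the kind of readmission model alluded to above: it produces a risk score for each discharged patient, yet the fitted model offers no causal account of why any particular patient is high-risk.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for discharge-time features
# (age, prior admissions, length of stay, summary lab score).
n = 2000
X = np.column_stack([
    rng.normal(65, 15, n),     # age
    rng.poisson(1.5, n),       # prior admissions in the past year
    rng.exponential(4, n),     # length of stay (days)
    rng.normal(0, 1, n),       # summary lab score
])
# Synthetic outcome: readmission within 30 days, loosely tied to the features.
logits = 0.02 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] - 2.5
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A black-box ensemble can predict readmission risk reasonably well...
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))

# ...but inspecting the fitted model yields a pile of small decision trees,
# not a causal explanation of why a given patient is high-risk.
print("number of fitted trees:", len(model.estimators_))
```

The model’s output is a probability, not an explanation; recovering the “why” is where, as the post argues, the scientific method still has work to do.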