
Incidental Findings in Deep Phenotyping Research: Legal and Ethical Considerations

By Amanda Kim, M.D., J.D., Michael Hsu, M.D., Amanda Koire, M.D., Ph.D., Matthew L. Baum, M.D., Ph.D., D.Phil.

What obligations do researchers have to disclose potentially life-altering incidental findings (IFs) as they happen in real time?

Deep phenotyping research in psychiatry integrates an individual’s real-time digital footprint (e.g., texts, GPS, wearable data) with their biomedical data (e.g., genetic, imaging, other biomarkers) to discover clinically relevant patterns, usually with the aid of machine learning. Findings that are incidental to the study’s objectives, but that may be of great importance to participants, will inevitably arise in deep phenotyping research.

The legal and ethical questions these IFs introduce are fraught. Consider three hypothetical cases below of individuals who enroll in a deep phenotyping research study designed to identify factors affecting risk of substance use relapse or overdose:

A 51-year-old woman with alcohol use disorder (AUD) is six months into sobriety. She is intrigued to learn that the study algorithm will track her proximity to some of her known triggers for alcohol relapse (e.g., bars, liquor stores), and asks to be warned with a text message when nearby so she can take an alternative route. Should the researchers share that data?

A 26-year-old man with AUD is two years into sobriety. Three weeks into the study, he relapses. He begins arriving at work inebriated and loses his job. After the study is over, he realizes the researchers may have been able to see from his alcohol use surveys, disorganized text messages, GPS tracking, and sensor data that he may have been inebriated at work, and wishes someone had reached out to him before he lost his job. Should they have?

A 35-year-old man with severe opioid use disorder experiences a near-fatal overdose and is discharged from the hospital. Two weeks later, his smartphone GPS is in the same location as his last overdose, and his wearable detects that his respiratory rate has plummeted. Should researchers call EMS?

These vignettes highlight several features unique to incidental findings in deep phenotyping research. They are specific: events like inebriation are contextualized by where, when, and for how long. They are actionable, because the observations are trackable in real time. They are numerous, because they multiply with each data stream. And they are probabilistic, because they are deduced from multiple observations and by evolving algorithms.

Because much has already been written about IFs generally, we aim here to highlight a few legal and ethical considerations specific to these features.

Legal considerations: mandatory reporting statutes, duty to rescue, and tort law

The specificity enabled by cross-linking multiple data streams may give rise to particular situations (e.g., an impaired driver, perinatal substance use) that could cause researchers to wonder whether mandatory reporting statutes apply. For example, with respect to substance use disorders, 25 U.S. states require healthcare professionals to report suspected prenatal drug use, eight states require doctors to test for prenatal drug exposure if they suspect drug use, 23 states consider substance use during pregnancy to be child abuse, and three consider it grounds for civil commitment. However, such obligations in most states apply specifically to members of certain professions acting in their professional capacity. It would be unlikely that researchers – even physician-researchers – acting in a research capacity would fall within their scope. (One exception might be mandatory reporting of child abuse, as 18 states require all persons to report suspected child abuse or neglect, regardless of profession.)

The actionability of real-time data, sometimes concerning serious and urgent events like overdose, might cause researchers to wonder whether they have a legal duty to rescue or to aid. While affirmative legal duties to rescue exist in some other countries, no general duty to rescue exists in the U.S., and because researchers are not traditionally considered fiduciaries, no legal obligation to act in the participant’s best interests (i.e., by disclosing) would arise from the researcher-participant relationship. However, if the researcher is also a participant’s physician (which can and does occur with some frequency in clinical research contexts), the analysis becomes more complex: the physician-researcher would owe the participant a fiduciary duty and, depending on the situation, a duty to disclose relevant and significant IFs.

However, the specificity of the incidental findings could potentially support a cause of action under the tort of ordinary negligence if a participant could prove that 1) the researchers owed a duty to report, 2) that duty was breached, and 3) the failure to report was the “but for” cause of the participant’s injury. To determine whether a duty existed and was breached, courts would look to the prevailing standard of care. The success of such a claim would thus likely turn on whether there is consensus on the professional standard of care for the disclosure of IFs. No such consensus currently exists in deep phenotyping, and standards vary even in well-established fields like genomics. There would also likely be considerable jurisdictional variation.

Ethical considerations: how good a “fit” are guidelines on IFs from other biomedical research?

An emerging consensus from the fields of genomics and medical imaging is that researchers have, at minimum, an ethical obligation to disclose IFs when the participant consents to disclosure and the disclosure “offers clear medical benefits.” If we accept that premise, applying even this ethical minimum to deep phenotyping proves challenging.

First, a single deep phenotyping study encompasses not only all the potential findings of a genomics or medical imaging study, but also findings that multiply with each additional data stream, making anticipation and full consideration of incidental findings during an informed consent conversation unrealistic. Machine learning algorithms, moreover, raise the potential for truly unforeseeable findings.

Second, a “clear medical benefit” is often specified in genomics to depend both on the certainty of the result and the actionability of the result. As the vignettes illustrate, 1) actionability would hardly narrow the list of reportable findings, and 2) certainty of IFs is difficult to determine ad hoc, as it may be dependent on the sensitivity and specificity of an algorithm which the study itself seeks to establish (and may even differ between individuals).

The future of IFs in deep phenotyping

The challenge of any new technology is identifying how well it can be handled by existing legal and moral structures. For deep phenotyping, the specificity, actionability, number, and uncertainty of incidental findings merit tailored consideration.


Department of Psychiatry, Brigham and Women’s Hospital, Boston, MA


This post is part of our Ethical, Legal, and Social Implications of Deep Phenotyping symposium. All contributions to the symposium are available here.
