
The Promise and Pitfalls of China’s QR Codes as Health Certificates

This article is adapted from a longer paper published in the Harvard Journal of Law and Technology (JOLT)’s Digest section. To access the original paper, please visit JOLT.

By April Xiaoyi Xu

At this point in the COVID-19 pandemic, China has largely contained the spread of the virus, due in large part to a technological strategy that uses QR codes as a kind of health certificate.

These color-coded QR codes are automatically generated from cell phone data. Green indicates that an individual is healthy and can move freely; yellow signals that the user must quarantine for up to seven days; and red requires a fourteen-day quarantine. The basis for these determinations, as well as the extent of the data collected in order to make them, remains opaque.
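The color-to-quarantine rule described above amounts to a simple lookup. The sketch below is purely illustrative: the actual scoring logic, inputs, and thresholds behind China's health codes are undisclosed, so the function name, mapping, and structure here are assumptions, not the real system's behavior.

```python
# Illustrative sketch only: the real determination logic is opaque.
# The mapping reflects the quarantine durations described in the article.
QUARANTINE_DAYS = {"green": 0, "yellow": 7, "red": 14}


def required_quarantine(color: str) -> int:
    """Return the quarantine duration (in days) implied by a code color."""
    try:
        return QUARANTINE_DAYS[color.lower()]
    except KeyError:
        raise ValueError(f"unknown code color: {color!r}")
```

A green code would thus permit free movement (`required_quarantine("green")` returns `0`), while a red code would imply a fourteen-day quarantine.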



Beyond HIPAA: A Proposed Self-Policing Framework for Digital Health Products

By Vrushab Gowda

As digital health products proliferate, app developers, hardware manufacturers, and other entities that fall outside Health Insurance Portability and Accountability Act (HIPAA) regulation are collecting vast amounts of biometric information. This burgeoning market has spurred patient privacy and data stewardship concerns.

To this end, two policy nonprofits – the Center for Democracy and Technology (CDT) and the eHealth Initiative (eHI) – earlier this month jointly published a document detailing self-regulatory guidelines for industry. The following piece traces the development of the “Proposed Consumer Privacy Framework for Health Data,” provides an overview of its provisions, and offers critical analysis.



Deep Phenotyping Could Help Solve the Mental Health Care Crisis

By Justin T. Baker

The United States faces a growing mental health crisis and offers insufficient means for individuals to access care.

Digital technologies — the phone in your pocket, the camera-enabled display on your desk, the “smart” watch on your wrist, and the smart speakers in your home — might offer a path forward.

Deploying technology ethically, while understanding the risks of moving too fast (or too slowly), could radically extend our limited toolkit for delivering high-quality care to the many people with mental health issues for whom the current system is out of reach or otherwise failing to meet their needs.



Incidental Findings in Deep Phenotyping Research: Legal and Ethical Considerations

By Amanda Kim, M.D., J.D., Michael Hsu, M.D., Amanda Koire, M.D., Ph.D., Matthew L. Baum, M.D., Ph.D., D.Phil.

What obligations do researchers have to disclose potentially life-altering incidental findings (IFs) as they happen in real time?

Deep phenotyping research in psychiatry integrates an individual’s real-time digital footprint (e.g., texts, GPS, wearable data) with their biomedical data (e.g., genetic, imaging, other biomarkers) to discover clinically relevant patterns, usually with the aid of machine learning. Findings that are incidental to the study’s objectives, but that may be of great importance to participants, will inevitably arise in deep phenotyping research.

The legal and ethical questions these IFs introduce are fraught. Consider three hypothetical cases below of individuals who enroll in a deep phenotyping research study designed to identify factors affecting risk of substance use relapse or overdose:

A 51-year-old woman with alcohol use disorder (AUD) is six months into sobriety. She is intrigued to learn that the study algorithm will track her proximity to some of her known triggers for alcohol relapse (e.g., bars, liquor stores), and asks to be warned with a text message when nearby so she can take an alternative route. Should the researchers share that data?

A 26-year-old man with AUD is two years into sobriety. Three weeks into the study, he relapses. He begins arriving to work inebriated and loses his job. After the study is over, he realizes the researchers may have been able to see from his alcohol use surveys, disorganized text messages, GPS tracking, and sensor data that he may have been inebriated at work, and wishes someone had reached out to him before he lost his job. Should they have?

A 35-year-old man with severe opioid use disorder experiences a near-fatal overdose and is discharged from the hospital. Two weeks later, his smartphone GPS is in the same location as his last overdose, and his wearable detects that his respiratory rate has plummeted. Should researchers call EMS?


Unique Challenges to Informed Consent in Deep Phenotyping Research

By Benjamin C. Silverman

Deep phenotyping research procedures pose unique challenges to the informed consent process, particularly because of the passive and boundless nature of the data being collected and how this data collection overlaps with our everyday use of technology.

As detailed elsewhere in this symposium, deep phenotyping in research involves the collection and analysis of multiple streams of behavioral (e.g., location, movement, communications) and biological (e.g., imaging, clinical assessments) data with the goal of better characterizing, and eventually predicting or intervening upon, a number of clinical conditions.

Obtaining voluntary, competent informed consent is a critical aspect of conducting ethical deep phenotyping research. Here, we address several challenges to obtaining informed consent in deep phenotyping research and describe some best practices and relevant questions to consider.



Lessons Learned from Deep Phenotyping Patients with Rare Psychiatric Disorders

By Catherine A Brownstein and Joseph Gonzalez-Heydrich

Given the potential sensitivities associated with describing (i.e., phenotyping) patients with potentially stigmatizing psychiatric diagnoses, it is important to acknowledge and respect the wishes of the various parties involved.

The phenotypic description and depiction of a patient in the literature, although deidentified, may still have a great impact on a family.

By way of example, a novel genetic variant was identified as a likely explanation for the clinical presentation of a patient in a large cohort of individuals with neurodevelopmental and/or psychiatric phenotypes, a finding of great medical interest. The research team elected to further study this candidate and collected samples for functional evaluation of the gene variant and preparation of a case report.

Because the patient had a complicated phenotype, several physicians from various specialties were involved in the patient’s care. The paper draft was circulated amongst the collaborating clinicians and researchers and ultimately shared with the patient’s family by one of their involved caregivers. This is typically not a requirement of such studies, as the informed consent process includes the subjects’ understanding and consent for dissemination of deidentified results in the scientific literature. But as a general practice, families are informed about manuscripts in process, and in this case the family had requested to be kept abreast of ongoing developments.



“Actionability” and the Ethics of Communicating Results to Study Participants

By Patrick Monette

To what end does a physician have a responsibility toward a research participant? Specifically, what data may be considered “actionable” for the physician to disclose to the patient, and when and how might this be done?

In the clinical setting, contemporary medical ethics address a physician’s “fiduciary responsibility.” That is, there is a well-established professional expectation that the physician will place the patient’s interests above their own and advocate for their welfare. This post focuses on an alternative dyad, that of physician and research participant, to explore how the field has broached the topic of actionability in the setting of clinical research.


Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to utilize the rivers of data we generate to better diagnose and treat medical conditions, especially mental health ones, such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once both digital phenotyping’s greatest strength and a significant challenge.

For example, the average smartphone user spends 2.25 hours a day across the 60–90 apps installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When dealing with this quantity of data generated by each patient or research subject, how does the care team ensure that they do not miss important predictors of health?



Online Terms of Use for Genealogy Websites – What’s in the Fine Print?

By Jorge L. Contreras

Since genealogy websites first went online, researchers have been using the data that they contain in large-scale epidemiological and population health studies. In many cases, data is collected using automated tools and analyzed using sophisticated algorithms.

These techniques have supported a growing number of discoveries and scientific papers. For example, researchers have used this data to identify genetic markers for Alzheimer’s Disease, to trace an inherited cancer syndrome back to a single German couple born in the 1700s, and to gain a better understanding of longevity and family dispersion. In the last of these studies, researchers analyzed family trees from 86 million individual genealogy website profiles.

Despite the scientific value of publicly available genealogy website data, and its free accessibility via the Internet, this data cannot always be used for research without the permission of the site operator or the individual data subjects.

In fact, the online terms of use (TOU) for genealogy websites may restrict or prohibit the types of uses for data found on those sites.



Insufficient Protections for Health Data Privacy: Lessons from Dinerstein v. Google

By Jenna Becker

A data privacy lawsuit against the University of Chicago Medical Center and Google was recently dismissed, demonstrating the difficulty of pursuing claims against hospitals that share patient data with tech companies.

Patient data sharing between health systems and large software companies is becoming increasingly common as these organizations chase the potential of artificial intelligence and machine learning in healthcare. However, many tech firms also own troves of consumer data, and these companies may be able to match up “de-identified” patient records with a patient’s identity.

Scholars, privacy advocates, and lawmakers have argued that HIPAA is inadequate in the current landscape. Dinerstein v. Google is a clear reminder that both HIPAA and contract law are insufficient for handling these types of privacy violations. Patients are left seemingly defenseless against their most personal information being shared without their meaningful consent.
