
Unique Challenges to Informed Consent in Deep Phenotyping Research

By Benjamin C. Silverman

Deep phenotyping research procedures pose unique challenges to the informed consent process, particularly because of the passive and boundless nature of the data being collected and how this data collection overlaps with our everyday use of technology.

As detailed elsewhere in this symposium, deep phenotyping in research involves the collection and analysis of multiple streams of behavioral (e.g., location, movement, communications) and biological (e.g., imaging, clinical assessments) data with the goal of better characterizing, and eventually predicting or intervening upon, a number of clinical conditions.

Obtaining voluntary, competent informed consent is a critical aspect of conducting ethical deep phenotyping research. Here we address several challenges to obtaining informed consent in deep phenotyping research and describe some best practices and relevant questions to consider.

Lessons Learned from Deep Phenotyping Patients with Rare Psychiatric Disorders

By Catherine A. Brownstein and Joseph Gonzalez-Heydrich

Given the potential sensitivities associated with describing (i.e., phenotyping) patients with potentially stigmatizing psychiatric diagnoses, it is important to acknowledge and respect the wishes of the various parties involved.

The phenotypic description and depiction of a patient in the literature, although deidentified, may still have a great impact on a family.

By way of example, a novel genetic variant was identified as a likely explanation for the clinical presentation of a patient in a large cohort of individuals with neurodevelopmental and/or psychiatric phenotypes, a finding of great medical interest. The research team elected to further study this candidate and collected samples for functional evaluation of the gene variant and preparation of a case report.

Because the patient had a complicated phenotype, several physicians from various specialties were involved in the patient’s care. The paper draft was circulated amongst the collaborating clinicians and researchers and ultimately shared with the patient’s family by one of their involved caregivers. This is typically not a requirement of such studies, as the informed consent process includes the subjects’ understanding and consent for dissemination of deidentified results in the scientific literature. But as a general practice, families are informed about manuscripts in process, and in this case the family had requested to be kept abreast of ongoing developments.

“Actionability” and the Ethics of Communicating Results to Study Participants

By Patrick Monette

To what end does a physician have a responsibility toward a research participant? Specifically, what data may be considered “actionable” for the physician to disclose to the patient, and when and how might this be done?

In the clinical setting, contemporary medical ethics addresses a physician’s “fiduciary responsibility.” That is, there is a well-established professional expectation that the physician will place the patient’s interests above their own and advocate for the patient's welfare. This post focuses on an alternative dyad, that of physician and research participant, to explore how the field has broached the topic of actionability in the setting of clinical research.

Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to utilize the rivers of data we generate to better diagnose and treat medical conditions, especially mental health ones, such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once both digital phenotyping’s greatest strength and a significant challenge.

For example, the average smartphone user spends 2.25 hours a day using the 60-90 apps installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When dealing with this quantity of data generated by each patient or research subject, how does the care team ensure that they do not miss important predictors of health?

Online Terms of Use for Genealogy Websites – What’s in the Fine Print?

By Jorge L. Contreras

Since genealogy websites first went online, researchers have been using the data that they contain in large-scale epidemiological and population health studies. In many cases, data is collected using automated tools and analyzed using sophisticated algorithms.

These techniques have supported a growing number of discoveries and scientific papers. For example, researchers have used this data to identify genetic markers for Alzheimer’s disease, to trace an inherited cancer syndrome back to a single German couple born in the 1700s, and to gain a better understanding of longevity and family dispersion. In the last of these studies, researchers analyzed family trees from 86 million individual genealogy website profiles.

Despite the scientific value of publicly available genealogy website information, and its free accessibility via the Internet, this data cannot always be used for research without the permission of the site operator or the individual data subjects.

In fact, the online terms of use (TOU) for genealogy websites may restrict or prohibit the types of uses for data found on those sites.

Insufficient Protections for Health Data Privacy: Lessons from Dinerstein v. Google

By Jenna Becker

A data privacy lawsuit against the University of Chicago Medical Center and Google was recently dismissed, demonstrating the difficulty of pursuing claims against hospitals that share patient data with tech companies.

Patient data sharing between health systems and large software companies is becoming increasingly common as these organizations chase the potential of artificial intelligence and machine learning in healthcare. However, many tech firms also own troves of consumer data, and these companies may be able to match up “de-identified” patient records with a patient’s identity.

Scholars, privacy advocates, and lawmakers have argued that HIPAA is inadequate in the current landscape. Dinerstein v. Google is a clear reminder that both HIPAA and contract law are insufficient for handling these types of privacy violations. Patients are left seemingly defenseless against their most personal information being shared without their meaningful consent.

Is Data Sharing Caring Enough About Patient Privacy? Part II: Potential Impact on US Data Sharing Regulations

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

Earlier, we discussed the new suit filed against Google, the University of Chicago (UC), and UChicago Medicine, focusing on the disclosure of patient data from UC to Google. This piece goes beyond that background to consider the potential impact of the lawsuit in the U.S. and to place it in the context of other trends in data privacy and security.

Is Data Sharing Caring Enough About Patient Privacy? Part I: The Background

By Timo Minssen (CeBIL, UCPH), Sara Gerke & Carmel Shachar

A recent US lawsuit highlights crucial challenges at the interface of data utility, patient privacy & data misuse

The huge prospects of artificial intelligence and machine learning (ML), as well as the increasing trend toward public-private partnerships in biomedical innovation, underscore the importance of effective governance and regulation of data sharing in the health and life sciences. Cutting-edge biomedical research demands high-quality data to ensure safe and effective health products. It is often argued that greater access to individual patient data collections stored in hospitals’ medical records systems may considerably advance medical science and improve patient care. However, as public and private actors attempt to gain access to such high-quality data to train their advanced algorithms, a number of sensitive ethical and legal aspects also need to be carefully considered. Besides giving rise to safety, antitrust, trade secrets, and intellectual property issues, such practices have resulted in serious concerns with regard to patient privacy, confidentiality, and the commitments made to patients via appropriate informed consent processes.

Do You Own Your Genetic Test Results? What About Your Temperature?

By Jorge L. Contreras

The popular direct-to-consumer genetic testing site AncestryDNA claims that “You always maintain ownership of your data.” But is this true? And, if so, what does it mean?

For more than a century, US law has held that data – objective information and facts – cannot be owned as property. Nevertheless, in recent years there have been increasing calls to recognize property interests in individual health information. Inspired by high-profile data breaches and skullduggery by Facebook and others, as well as ever more frequent stories of academic research misconduct and pharmaceutical industry profiteering, many bioethicists and patient advocates, seeking to bolster personal privacy and autonomy, have argued that property rights should be recognized in health data. In addition, a new crop of would-be data intermediaries (e.g., Nebula Genomics, Genos, Invitae, LunaDNA, and Hu.manity.org) has made further calls to propertize health data, presumably to profit from acting as the go-betweens in what has been estimated to be a $60-$100 billion global market in health data.