[Image: still from the movie Groundhog Day of a man looking at a groundhog]

Regulating Human Subject Research: Like Being an Unwilling Participant in a Very Nerdy Version of Groundhog Day

This is the first in a series of blog posts in which I will develop an overview of some of the significant gaps in the regulation of human subject research. I will be looking at the use of living human beings as participants in experiments conducted by academics to advance knowledge, as well as by biomedical companies to test new products.

My goal is twofold: I hope to develop the first Nutshell in Human Subject Regulation as a valuable handbook for those studying and working in this field. In addition, I hope to bring together two issues that are too often treated separately: providing greater protection to people who participate in research as subjects, and increasing the quality of the information generated.

Introduction to Human Subject Research Regulation

The United States protects people who participate in research studies conducted, funded, or supervised by the federal government through the provisions of 45 CFR 46. Enacted in 1974, and called the Common Rule because it has been adopted throughout most of the federal government, the regulations adopt the ethical principles established in the Belmont Report, which was commissioned by Congress, and create mechanisms for reviewing proposed experiments in advance, as well as for ongoing review.

The Belmont Report recognized three primary principles for evaluating all human subject research: respect for persons, beneficence, and justice. Those conducting the research study must ensure that the people who participate do so voluntarily, that they not be asked to take on unreasonable risk, and that when there is risk, it is distributed fairly and not imposed on populations with less power to protect themselves. By recognizing that the “harm” to participants extends beyond physical or mental damage, the legal structure of human subject protection keeps its focus on ethical analysis. As a result, it is a field of study where lawyers (such as Rebecca Dresser), ethicists (such as Alex John London), and doctors (such as John Lantos) mix freely. Essentially, the law requires researchers to understand and apply ethical principles at every stage of the experiment.

After much delay, the Common Rule has been revised, updated, and in some areas relaxed. This was certainly long overdue. In 1974, mapping the human genome was a fantasy, hospitals kept their records in paper files, and researchers communicated with each other by letter (with stamps).

Today, genetic research is universal, privacy is nonexistent, confidential data breaches can affect millions of people, and labs across the globe work together in real time. The new regulations have been mostly welcomed by both researchers and the vast support network of compliance professionals, although, like anything new, it is far too soon to tell whether they will, as promised, make the process of research safer, faster, and cheaper.

Certainly, the updated regulations will make a significant difference for those in the social sciences who chafed at rules established primarily for biomedical research. But for those whose lives are affected by research, either as participants or as consumers of the products or practices that emerge from its findings, the much larger issue is the leaky patchwork of the existing regulatory system, and the ways it still leaves participants vulnerable to harm. Nor is there any effective mechanism for assessing the quality of the information that research generates for those consumers.

Shouldn’t we worry when the journals publishing research results have to hire their own watchdogs to find fraudulent research or evidence of misconduct?

The Problem of Rear-View Mirror Regulation

First, we must address the fundamental inability to study the research subjects protected by the Common Rule. This is because there is no central registry of research subjects, let alone of research-related injuries. Indeed, the information about the experience of subjects in any particular study is so shielded from scrutiny that often we only learn about problematic studies after they have concluded — and only through the press or through public-minded entities like Retraction Watch or ProPublica. This raises problems both for making an independent assessment of the reported results and for evaluating the effectiveness of regulations intended to protect the participants.

The problem is not just the lack of timely information, but also that when we do hear of studies where people were harmed, the failure to protect them seems so blatant as to defy efforts at crafting more effective protections. Yet, while there are certainly many examples of rogue researchers engaging in secretive and clearly dangerous studies, much more common are accounts of studies conducted by dedicated scientists at highly respected institutions, studies that were appropriately reviewed and government funded. Why were those involved so blind to what seem, in hindsight, to be obvious dangers? What factors make contemporary self-regulation so difficult?

This conundrum of “good” and “smart” people repeatedly doing risky and harmful things is what makes the task of assessing current protective schemes and proposing new ones like that most clichéd of popular culture tropes: being caught in a time loop.

Whether the reference is to Groundhog Day, Legends of Tomorrow, or the latest, Russian Doll, the regular revelation of past dangerous and unethical studies should warn us that we do not always appreciate the signs of trouble when we see them.

At least part of the problem is cultural. Research is usually conducted in highly hierarchical settings where asking questions, let alone engaging in whistleblowing, is a career-ending activity.

Another barrier comes from financial and professional pressures to produce positive results that will ensure continued funding or the marketing of a successful product.

We must also recognize that equally qualified and highly ethical peers can hold sharply differing views about the risks and ethics of the same study.

Finally, we must consider that the inherent imbalance in knowledge and understanding between researchers and potential research participants makes the concept of “informed consent” a shaky base on which to build an effective system of protection.

I look forward to raising other issues in subsequent blog posts over the next few months, and I am very grateful for the opportunity to do so as a Visiting Scholar at the Petrie-Flom Center this spring.

Jennifer S. Bard

Jennifer S. Bard is a professor of law at the University of Cincinnati College of Law where she also holds an appointment as professor in the Department of Internal Medicine at the University of Cincinnati College of Medicine. Prior to joining the University of Cincinnati, Bard was associate vice provost for academic engagement at Texas Tech University and was the Alvin R. Allison Professor of Law and director of the Health Law and JD/MD program at Texas Tech University School of Law. From 2012 to 2013, she served as associate dean for faculty research and development at Texas Tech Law.
