Social Inequality in Clinical Research

by Suzanne M. Rivera, PhD

For a variety of reasons, racial and ethnic minorities in the US do not participate in clinical research in numbers proportionate to their representation in the population.  Although legitimate mistrust by minorities of the healthcare system is one reason, institutional barriers and discrimination also contribute to the problem.  The equitable inclusion of minorities in research is important, both so that they receive an equal share of the benefits of research and to ensure that they do not bear a disproportionate burden.

Under-representation is not just a question of fairness in the distribution of research risks.  It also creates burdens for minorities because it leads to poorer healthcare.  Since participation in clinical trials provides extra consultation, more frequent monitoring, and access to state-of-the-art care, study participation can represent a significant advantage over standard medicine.  To the extent that participation in research may offer direct therapeutic value to study subjects, under-representation of minorities denies them, in a systematic way, the opportunity to benefit medically.

For many years, our system for drug development has operated under the assumption that we can test materials in one kind of prototypical human body and then extrapolate the data about safety and efficacy to all people.  That’s a mistake; the more we learn about how drug metabolism varies with genetics and environmental factors, the more important it becomes to account for sub-group safety and efficacy outcomes.  More recently, greater emphasis has been placed on community-based participatory research.  This movement toward sharing decision-making power between the observer and the observed is a critical step for addressing both the subject and researcher sides of the inequality equation.

Research Exceptionalism Diminishes Individual Autonomy

by Suzanne M. Rivera, Ph.D.

One of the peculiar legacies of unethical human experimentation is an impulse to protect people from perceived research risks, even when that means interfering with the ability of potential participants to exercise their own wills.  Fears about the possibility of exploitation and other harms have resulted in a system of research oversight that in some cases prevents people from even having the option to enroll in certain studies because the research appears inherently risky.

Despite the fact that one of the central (some would say, the most important) principles of ethical human research is “respect for persons” (shorthand: autonomy), our current regulations – and the institutions that enforce them – paradoxically promote an approach to research gate-keeping which emphasizes the prevention of potential harm at the expense of individual freedom.  As a result, research activities often are treated as perils from which unsuspecting recruits should be shielded, either because the recruits themselves are perceived as too vulnerable to make reasoned choices about participation, or based on the premise that no person of sound mind should want to do whatever is proposed.

One example of such liberty-diminishing overprotection is the notion that study participants should not be paid very much for their time or discomfort because to provide ample compensation might constitute undue inducement.  Although there is no explicit regulatory prohibition against compensating research participants for their service, the Common Rule requires researchers to “seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence.”  This has been interpreted by many to mean that payment for study participation cannot be offered in amounts greater than a symbolic thank-you gesture and bus fare.

When Do Doctors Discount Clinical Trial Results?

by Jonathan J. Darrow

A research study reported today in the New England Journal of Medicine found that physicians are able to distinguish between clinical trials with high and low levels of rigor, as well as between clinical trials funded by industry and those funded by the government.

The randomized study analyzed the responses of 269 physicians who were presented with hypothetical abstracts of clinical trial findings for three hypothetical drugs.  Abstracts were deliberately crafted to reflect three levels of clinical trial rigor (low, medium, and high), and three types of funding disclosure (no disclosure, National Institutes of Health funding, and pharmaceutical industry funding), yielding 27 abstract types.

The major finding of the study was that physicians are less willing “to believe and act on trial findings, independent of the trial’s quality,” if the trial is funded by industry.  The fact that industry funding led to a decrease in perceived credibility, even for large and well-designed trials, concerned the study authors, who felt that “[t]he methodologic rigor of a trial, not its funding disclosure, should be a primary determinant of its credibility.”

The full article citation is: Aaron S. Kesselheim et al., A Randomized Study of How Physicians Interpret Research Funding Disclosures, 367(12) New Eng. J. Med. 1119 (Sept. 20, 2012).

[Editorial Note: And within the et al. is Chris Robertson, a former Petrie-Flom Academic Fellow, current prof at University of Arizona, and future guest blogger here at Bill of Health!]

To Tell or Not to Tell: Should Researchers Contact Anonymous Donors to Help Them?

By Cansu Canca

A recent New York Times article drew attention to an issue with increasing importance as technology develops. Gene samples collected under conditions of anonymity reveal more and more information that may be of crucial importance for the subjects or their relatives. Researchers feel a moral obligation to disclose these important findings, which may even be life-saving, to the subjects. Yet, the anonymity clause in the consent forms prevents them from doing so.

Whether or not researchers can or must disclose the information in spite of the anonymity clause mainly turns on two issues: the scope of the informed consent and the reach of the obligation for beneficence.


New article on managing inherent conflicts in human subjects research

“In Plain Sight: A Solution to a Fundamental Challenge in Human Research”
Journal of Law, Medicine & Ethics, Forthcoming (Lois Shepherd and Margaret Foster Riley, UVA)

From the abstract: The conflict of interest created when physician-researchers combine medical research and treatment is a long-standing and widely recognized ethical challenge of clinical research that has thus far eluded satisfactory solution. A researcher’s obligation to the scientific enterprise not only provides the temptation to ignore the medical needs of subjects in a study, it may provide an obligation, short of actually endangering subjects, to override their medical needs or preferences. Expecting research subjects to protect themselves through informed consent processes is unrealistic, as is expecting physician-researchers to internally navigate this conflict simply by being virtuous. The problem is a structural one that requires a structural solution. People who are receiving medical treatment need a doctor devoted to their care to provide the independent, individualized judgment and advice expected of a physician outside of research. Reliance on other, existing protective mechanisms — institutional review boards, data safety monitoring boards, medical monitors or even the new research subject advocacy programs — falls short, as indeed each of those mechanisms assumes that the subject will be protected and advised by the local investigators, a role they cannot fulfill. For these reasons, we propose that in much clinical research, each research subject should have a doctor independent from the research study.

Treatment of Subject Injury: Fair is Fair

By Suzanne M. Rivera, Ph.D.

Of all the protections provided in the Common Rule to safeguard the rights and welfare of research participants, there’s one glaring omission: treatment of study-related injuries.

Our current regulatory apparatus is silent on whether treatment of injuries incurred while participating in a study ought to be the responsibility of the sponsor, the researcher, or the test subjects.  The closest thing to guidance we are given on this topic in the Common Rule is a requirement that, if the study involves more than minimal risk, the informed consent document must provide, “an explanation as to whether any compensation and an explanation as to whether any medical treatments are available if injury occurs and, if so, what they consist of, or where further information may be obtained.”

Note, the regulations do not state that plans must be made to provide treatment at no cost to the participants.  In fact, the regulations don’t say treatment needs to be made available at all.  Thus, it is possible to comply with the letter and spirit of the regulations by stating the following in an informed consent document: “There are no plans to provide treatment if you should be injured or become ill as a result of your participation in this study.”  Or even: “The costs of any treatment of an injury or illness resulting from your participation in this study will be your responsibility.”

Greenpeace Out to Sea on GM Rice Issue

[posted on behalf of Art Caplan]

Greenpeace, perhaps best known for its battles at sea to protect whales and the oceans, has gotten itself involved in a huge controversy over genetically modified food.

The group is charging that unsuspecting children were put at risk in a “dangerous” study of genetically engineered rice in rural China.  It’s a serious claim, and one that jeopardizes research seeking to put more nutrition into food.

Genetically engineered rice has the potential to help solve a big nutritional problem—vitamin A deficiency.  A lack of vitamin A kills 670,000 kids under 5 every year and causes 250,000 to 500,000 to go blind. Half die within a year of losing their sight, according to the World Health Organization. I think Greenpeace is being ethically irresponsible and putting those lives at continued risk.

Read the rest over at NBCNews Vitals.

Alan Wertheimer at HLS tonight

Short notice, but…

Alan Wertheimer will be presenting his draft paper “Why Is Consent a Requirement for Ethical Research?” tonight at the Health Law Policy and Bioethics Workshop at Harvard Law School.

These workshops take place on selected Mondays from 5-7pm, Hauser Hall, Room 105. This year’s schedule can be found here.  Open to the public – check it out if you’re in town.

Research Participation as a Responsibility of Citizenship

by Suzanne M. Rivera, Ph.D.

For legitimate reasons, the human research enterprise frequently is regarded with suspicion.  Despite numerous rules in place to protect research participants’ rights and welfare, there is a perception that research is inherently exploitative and dangerous.

Consequently, most people don’t participate in research.  This is not only a fairness problem (few people undergo risk and inconvenience so many can benefit from the knowledge derived), but also a scientific problem, in that the results of studies based on a relatively homogeneous few may not be representative of, or applicable to, the whole population.  Larger numbers of participants would improve statistical power, allowing us to answer important questions faster and more definitively.  And more heterogeneous subject populations would give us information about variations within and between groups (by age, gender, socio-economic status, ethnicity, etc.).

Put simply, it would be better for everyone if we had a culture that promoted research participation, whether active (like enrolling in a clinical trial) or passive (like allowing one’s data or specimens to be used for future studies), as an honorable duty.  (Of course, this presumes the research is done responsibly and in a manner consistent with ethical and scientific standards, and the law.)

Broadening “Innovation Law & Policy” (and “Human Subjects Research”)

By Michelle Meyer

In legal scholarship and education, innovation law and policy is virtually synonymous with intellectual property in general, and with patent law in particular. This is curious and, I think, misguided. We expend considerable effort designing optimal incentives for innovation. We expend similar effort ensuring that socially useful knowledge, once produced, is widely and accurately disseminated. But if knowledge-producing activities themselves are suboptimally regulated, neither upstream incentives to engage in them nor downstream mechanisms to disseminate their fruits will much matter.

In Regulating the Production of Knowledge: Research Risk-Benefit Analysis and the Heterogeneity Problem, I

critically examine[] that regulatory framework, adopted by more than one dozen federal agencies in the U.S. and many other countries, which governs the vast majority of those knowledge-producing activities that have the greatest potential to affect human welfare: research involving human beings, or “human subjects research” (HSR). [The Article] focuses on the primary actors in the regulation of HSR — licensing committees called Institutional Review Boards (IRBs) which, before each study may proceed, must find that its risks to participants are “reasonable in relation to” its expected benefits for both participants and society. It argues for a particular interpretation of this risk-benefit standard and, drawing on scholarship in psychology, economics, neuroscience and other fields, argues that participant heterogeneity prevents IRBs from carrying out their regulatory duty. Instead, the regulatory system implicitly responds to the heterogeneity problem with risk aversion that is costly not only to researchers and society but, critically, to would-be research participants. The Article concludes by laying out the policy options that remain in the wake of the heterogeneity problem’s intractability: continuing the legal fiction of risk-benefit analysis, honestly embracing the heterogeneity problem and its costs, or jettisoning IRB risk-benefit analysis. A companion Article develops the possibility of the third option.

HSR is not, of course, unknown to the legal academy.