Conflicting Interests in Research: Don’t Assume a Few Bad Apples Are Spoiling the Bunch

by Suzanne M. Rivera, Ph.D.

In August of 2011, the Public Health Service updated its rules to address the kinds of financial conflicts of interest that can undermine (or appear to undermine) integrity in research.  The new rules, which carry the ungainly title “Responsibility of Applicants for Promoting Objectivity in Research for which Public Health Service Funding is Sought and Responsible Prospective Contractors,” came with a one-year implementation period to give universities and academic medical centers sufficient time to update their local policies and procedures for disclosure, review, and management (to the extent possible) of any conflicts their researchers might have between their significant personal financial interests and their academic and scholarly activities.

The rules were made significantly stricter because a few scoundrels have behaved in ways that undermined the public’s trust in scientists and physicians.  By accepting hundreds of thousands, even millions, of dollars from private pharmaceutical companies and other for-profit entities while performing studies on drugs and devices manufactured by those same companies, a few bad apples have called into question the integrity of the whole research enterprise.  This is a tremendous shame.

Having more than one interest is not bad or wrong; it’s normal.  Everyone has an attachment to the things they value, and most people value more than one thing.  Professors value their research, but they also want accolades, promotion, academic freedom, good parking spots, and food on their tables.  Having multiple interests only becomes a problem when the potential for personal enrichment or glory causes someone (consciously or unconsciously) to behave without integrity and compromise the design, conduct, or reporting of their research.

Fear of a Digital Planet

by Suzanne M. Rivera

Federal regulations and ethical principles require that Institutional Review Boards (IRBs) consider the anticipated risks of a proposed human research study in light of any potential benefits (for subjects or others) before granting authorization for its performance.  This is required because, prior to the oversight required by regulation, unethical researchers exposed subjects to high degrees of risk without sufficient scientific and ethical justification.

Although the physical risks posed by clinical research are fairly well understood, so-called “informational risks”—risks of privacy breaches or violations of confidentiality—are the source of great confusion and controversy.  How do you quantify the harm that comes from a stolen, but encrypted, laptop full of study data?  Or the potential for embarrassment caused by observation of texted conversations held in a virtual chat room?

IRBs have for years considered the potential magnitude and likelihood of research risks in comparison to those of activities and behaviors normally undertaken in regular, everyday life.  But everyday life in today’s digital world is very different from everyday life in 1981, when the regulations were implemented.  People share sonogram images on Facebook, replete with the kinds of information that would, in a research context, constitute a reportable breach under the Office for Civil Rights’ HIPAA Privacy Rule.  They also routinely allow their identities, locations, and other private information to be tracked, stored, and shared in exchange for “free” computer applications downloaded to smart phones, GPS devices, and tablet computers.

Reality Check, Please!

by Suzanne M. Rivera, Ph.D.

By now, many people have seen a still photo or video footage of Rep. Paul Broun (R-Georgia), standing in front of a wall full of deer heads, proclaiming that evolution, embryology, and the Big Bang theory are “lies straight from the pit of hell.”  According to the Congressman, “it’s lies to try to keep me and all the folks who were taught that from understanding that they need a savior.”

Readers might shrug these statements off as merely absurd.  But Rep. Broun is a member of the U.S. House Committee on Science, Space, and Technology, and he’s running unopposed for re-election.

Broun is just one of the members of the House Committee on Science, Space, and Technology whose unusual views should give voters pause.  His colleagues on the Committee include Todd “Legitimate Rape” Akin (R-MO) and Randy Neugebauer (R-Texas), whose congressional record is most notable for introducing a resolution that “people in the United States should join together in prayer to humbly seek fair weather conditions.”

What Is a (Big) Bird in the Hand Worth?

by Suzanne M. Rivera, Ph.D.

The Presidential debate on Wednesday was fascinating theater.  Much of the post-debate commentary in social media has focused on Mitt Romney’s threat to cut federal funding to PBS as a way of reducing the deficit (save Big Bird!) and President Obama’s unexpected restraint (including this piece of NSFW satire from The Onion).

I love Big Bird as much as the next child of the Sesame Street generation, but I’m even more concerned about something else.

Very little attention has been paid to the fact that neither candidate said much about science.  Because I was listening closely for any reference to research or innovation in medicine and healthcare, I can tell you this: the closest either candidate came to making a claim in this area was President Obama, who (I think) intended to say something like, ‘Cuts to basic science and research would be a mistake.’

Social Inequality in Clinical Research

by Suzanne M. Rivera, PhD

For a variety of reasons, racial and ethnic minorities in the US do not participate in clinical research in numbers proportionate to their representation in the population.  Although legitimate mistrust of the healthcare system by minorities is one reason, institutional barriers and discrimination also contribute to the problem.  The equitable inclusion of minorities in research is important, both to ensure that they receive an equal share of the benefits of research and to ensure that they do not bear a disproportionate share of its burdens.

Under-representation is not just a question of fairness in the distribution of research risks.  It also harms minorities because it leads to poorer healthcare.  Since participation in clinical trials provides extra consultation, more frequent monitoring, and access to state-of-the-art care, it can represent a significant advantage over standard medicine.  To the extent that participation in research may offer direct therapeutic value to study subjects, under-representation denies minorities, in a systematic way, the opportunity to benefit medically.

For many years, our system for drug development has operated under the assumption that we can test materials in one kind of prototypical human body and then extrapolate the data about safety and efficacy to all people.  That’s a mistake; the more we learn about how drugs are metabolized differently depending on genetics and environmental factors, the more important it becomes to account for sub-group safety and efficacy outcomes.  More recently, greater emphasis has been placed on community-based participatory research.  This movement toward sharing decision-making power between the observer and the observed is a critical step for addressing both the subject and researcher sides of the inequality equation.

Research Exceptionalism Diminishes Individual Autonomy

by Suzanne M. Rivera, Ph.D.

One of the peculiar legacies of unethical human experimentation is an impulse to protect people from perceived research risks, even when that means interfering with the ability of potential participants to exercise their own wills.  Fears about the possibility of exploitation and other harms have resulted in a system of research oversight that in some cases prevents people from even having the option to enroll in certain studies because the research appears inherently risky.

Despite the fact that one of the central (some would say the most important) principles of ethical human research is “respect for persons” (shorthand: autonomy), our current regulations, and the institutions that enforce them, paradoxically promote an approach to research gate-keeping that emphasizes the prevention of potential harm at the expense of individual freedom.  As a result, research activities often are treated as perils from which unsuspecting recruits should be shielded, either because the recruits themselves are perceived as too vulnerable to make reasoned choices about participation, or based on the premise that no person of sound mind should want to do whatever is proposed.

One example of such liberty-diminishing overprotection is the notion that study participants should not be paid very much for their time or discomfort, because providing ample compensation might constitute undue inducement.  Although there is no explicit regulatory prohibition against compensating research participants for their service, the Common Rule requires researchers to “seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence.”  This has been interpreted by many to mean that payment for study participation cannot be offered in amounts greater than a symbolic thank-you gesture and bus fare.

Treatment of Subject Injury: Fair is Fair

By Suzanne M. Rivera, Ph.D.

Of all the protections provided in the Common Rule to safeguard the rights and welfare of research participants, there’s one glaring omission: treatment of study-related injuries.

Our current regulatory apparatus is silent on whether treatment of injuries incurred while participating in a study ought to be the responsibility of the sponsor, the researcher, or the test subjects.  The closest thing to guidance we are given on this topic in the Common Rule is a requirement that, if the study involves more than minimal risk, the informed consent document must provide “an explanation as to whether any compensation and an explanation as to whether any medical treatments are available if injury occurs and, if so, what they consist of, or where further information may be obtained.”

Note that the regulations do not state that plans must be made to provide treatment at no cost to the participants.  In fact, the regulations don’t say treatment needs to be made available at all.  Thus, it is possible to comply with the letter and spirit of the regulations by stating the following in an informed consent document: “There are no plans to provide treatment if you should be injured or become ill as a result of your participation in this study.”  Or even: “The costs of any treatment of an injury or illness resulting from your participation in this study will be your responsibility.”

Research Participation as a Responsibility of Citizenship

by Suzanne M. Rivera, Ph.D.

For legitimate reasons, the human research enterprise frequently is regarded with suspicion.  Despite numerous rules in place to protect research participants’ rights and welfare, there is a perception that research is inherently exploitative and dangerous.

Consequently, most people don’t participate in research.  This is not only a fairness problem (few people undergo risk and inconvenience so that many can benefit from the knowledge derived), but also a scientific problem, in that the results of studies based on a relatively homogeneous few may not be representative of, or applicable to, the whole population.  Larger numbers of participants would improve statistical power, allowing us to answer important questions faster and more definitively.  And more heterogeneous subject populations would give us information about variations within and between groups (by age, gender, socio-economic status, ethnicity, etc.).

Put simply, it would be better for everyone if we had a culture that promoted research participation, whether active (like enrolling in a clinical trial) or passive (like allowing one’s data or specimens to be used for future studies), as an honorable duty.  (Of course, this presumes the research is done responsibly and in a manner consistent with ethical and scientific standards, and the law.)