Earlier this month, the American Association of University Professors (AAUP) recommended that researchers should be trusted with the ability to decide whether individual studies involving human subjects should be exempt from regulation. The AAUP’s report, which was prepared by a subcommittee of the Association’s Committee on Academic Freedom and Tenure, proposes that minimal risk research should be exempt from the human research protection regulations and that faculty ought to be given the ability to determine when such an exemption may apply to their own projects.
Specifically, the report states, “Research on autonomous adults should be exempt from IRB approval (straightforwardly exempt, with no provisos and no requirement of IRB approval of the exemption) if its methodology either (a) imposes no more than minimal risk of harm on its subjects, or (b) consists entirely in speech or writing, freely engaged in, between subject and researcher.”
These recommendations, designed to address long-standing concerns by social scientists about bureaucratic intrusions into their work, are misguided and could result in real harm to research subjects.
Since the current human research protection system was promulgated in 1981, researchers have bristled at the power granted to Institutional Review Boards, the ethics committees charged with reviewing proposed human experiments and serving as gatekeepers to prevent unjustifiably risky research from being performed. There is widespread agreement among researchers and regulators alike that the current oversight structure needs refinement. For this reason, the Office of Human Research Protections issued an Advance Notice of Proposed Rulemaking (ANPRM) to solicit feedback about ways to improve the system. There is great anticipation of changes to come.
The AAUP report seizes upon this move by OHRP as evidence that the current system is irrevocably broken. Much of the rationale in the report rests on the notion that the large number of comments from the public in response to the ANPRM is evidence of a system beyond repair. However, placing responsibility for exemption determinations entirely in the hands of researchers is a bad fix. Faculty researchers are notoriously poor judges of the risks posed by their studies. Ask anyone experienced in the IRB “intake” process how often studies posing more than minimal risk are submitted for verification of “exempt” status, and I would bet the number is substantial.
There are two inherent biases that contribute to this problem. The first is familiarity bias (Heath and Tversky 1991), which leads people to underestimate the risks of techniques or procedures with which they are familiar. You can see this play out in a real, live IRB meeting when oncologists assert that chemotherapy is not very risky but become quite animated about the potential harm that could come to a subject from completing a social science survey about job satisfaction.
The second is self-interest bias (Darke and Chaiken 2005). Given all the pressures on a typical faculty member to bring in grants, publish papers, serve on committees, teach classes and advise students (quite often in that order), it would be completely reasonable to expect that, when given the opportunity to self-exempt and thereby avoid a system that feels like a big bureaucratic hassle, there would be some pressure (perhaps unconscious) to lean toward exemption. But this would put subjects at risk.
IRBs exist for a reason. The imperfect system of oversight we have now is a result of numerous research tragedies caused by otherwise well-meaning scientists who, without regulatory guidance or oversight, made poor ethical decisions. Studies of syphilis in uneducated sharecroppers in Tuskegee, studies that explored the influence of authority on the human capacity for cruelty in Palo Alto and New Haven, studies about sexual behavior in public restrooms in St. Louis—all of these were performed before the federal requirement for IRB review of human research studies. All have come under fire for violating the fundamental ethical principles of respect for persons, beneficence or justice.
The fact of IRB oversight is not the problem with our current system. A fresh look at research projects by people who are not personally invested in the question under study is a good and helpful way to protect subjects. What makes researchers angry is that IRBs occasionally come to poor decisions or apply the regulations in an overly rigid way. That’s a fair critique, and one that OHRP is trying to address by updating the regulations and encouraging better education of IRB members and researchers. But claiming that IRBs infringe on academic freedom is a red herring. Freedom from what? IRBs are not an administrative apparatus charged with obstructing faculty but rather peer review committees, composed mostly of other scientists.
It’s not a good idea to categorically remove a portion of human research from the oversight system. What researchers need is for IRBs to be well-supported, well-trained, and expert at applying ethical principles that have stood the test of time. This includes knowing when to take full advantage of flexibilities that already are contained in the regulations, which allow lower risk studies to get a green light with less hoop jumping.
It is quite reasonable for busy faculty members to want relief from unnecessary bureaucratic hassles. But real protections are needed so that research subjects are not inadvertently harmed. IRB oversight—even if it is just a simple registration process for low-risk studies—is a good thing. Self-exemption by researchers would increase the likelihood that a truly risky study gets done without proper consideration. Improve IRBs—that’s a worthy goal. But let’s not throw the ethics baby out with the bureaucracy bathwater.