Working in private, the National Academy of Sciences’ panel on human-subjects regulations in the social-behavioral sciences met this weekend to draft a final report. On Friday, the panel had wrapped up its public “Workshop on Proposed Revisions to the Common Rule in Relation to Behavioral and Social Sciences.” The workshop aimed to critique OHRP’s proposed revisions to the federal human-subjects regulations (known as the Common Rule), rather than to critique the regulations directly.
Here are a few of the takeaway points that the National Academy panel members said they drew from the public workshop, which I attended:
- LOW-RISK: It’s essential to change the regulations for lower-risk research, but the ANPRM does not currently set out a good way to do this. Few participants seemed keen on the proposed new category of “excused,” nor did they like the current use of “exempt.” The key question, to my mind, is this: How much autonomy do the panelists think should be handed over to scholar-investigators and taken away from IRBs? Speaker Lois Brako advocated requiring everyone to register their studies with their institutions. Other speakers (Brian Mustanski, Rena Lederman) suggested researchers should be given leeway to interpret abstract terms such as “risk” and key moments such as when a study begins. Do the panelists agree that scholar-investigators are trustworthy and knowledgeable enough to interpret the regulations?
- INTERNATIONAL: The Common Rule gives little attention to research outside the USA, and OHRP’s proposed revisions do not address this dangerous and retrograde gap. Pearl O’Rourke of Partners Healthcare and Thomas Coates of UCLA usefully emphasized this point and showed what is at stake. To my mind, the question for many researchers will be, How should cross-national differences—in institutions’ resources, in study populations—be taken into account in the regulations? Medical anthropologists, for example, are in the midst of a raging debate over this issue. The traditional view has been that we should respect local differences, and this was the original point of requiring IRBs to account for “community attitudes,” a requirement that has morphed into a big problem for multisite studies today. The avant-garde in medical anthropology suggests that such “ethical variability” is not just inhumane but also indulges a Western insistence on treating some people as “others” rather than as us—whether in the USA or abroad—which happens to be very convenient for drug developers. In my own research, IRB members also faced the more routine question of whether “community” meant a study population, the local residents of a region, or something else altogether. The panel may not have time to consider whether it makes sense to clarify what “community” means and, more broadly, who gets to speak on behalf of a “community” regarding its attitudes.
- PRIVACY: We have to come up with a system for reviewing social-behavioral research that is either more flexible or more refined. There is a wide range of protections that are appropriate for some studies but quickly become inappropriate when applied to others. Comparing a few of the presentations makes this point. George Alter explained the rigorous and necessary privacy-protection plan for the big data sets and collaborative networks involved in the University of Michigan’s ICPSR. On the flip side, Brian Mustanski and Rena Lederman described the overweening attention paid to the so-called risks of their studies, which involve first-hand interviews and observations.
- EVIDENCE: We need more data on IRB outcomes. It is apparent that the data exist—as talks such as Lois Brako’s showed; she documented her team’s impressive overhaul of the IRB at the University of Michigan, which was dysfunctional only a few years ago. The data need to be expanded, analyzed, and shared—and supported for the long term. Who will have the money or time for that? That remains to be seen, but either way I will be curious to see the effects of the workshop buzzword: “evidence-based” decision-making. Although panelists saw value in case studies, it would be easiest for them and for policymakers to prioritize problems that can be documented with statistics rather than stories. I wonder, How might this skew the problems that get identified and the people included in the discussions?
Equally interesting was what was left formally unsaid at the workshop.
- The elephant not in the room was the Office for Human Research Protections, the federal agency that published the ANPRM in July 2011 and that will be revising the Common Rule.
- Although the panel assured participants that it is writing only a report at this stage, there was optimistic buzz around the rotunda that the panel would also write a consensus statement.
The workshop was fascinating, and to my mind one of the biggest challenges to revising human-subjects regulations can be seen in the name of the workshop itself: the category of the “behavioral and social sciences” is no longer appropriate for thinking through the problems facing members of these fields. There is not a one-to-one correlation between a discipline (psychology, anthropology) and a research method (digital data analysis, interviews, biological sample collection). Regulations should be organized by research method, not by discipline. The very concept of “social-behavioral” research obscures the problem that the National Academies, and many other groups, are trying to overcome.
Stay tuned for the panel’s final report.