Changing human-subjects regulations? Tune in now

Today and tomorrow, the National Academy of Sciences is hosting a workshop on revisions to the human-subjects regulations (the “Common Rule”), especially for rules on social and behavioral research. The workshop is being simulcast, and viewers can send in questions. Join us!

The most provocative presentation this morning, from my perch in the front row, was from Brian Mustanski, who studies adolescent health and risk behaviors–especially same-sex experiences. It’s an important topic to study because of the risk of HIV/AIDS transmission, among other things. But it’s tough for investigators to conduct studies on sex because the topic worries Institutional Review Boards (or researchers believe the topic will worry their IRBs). Sociologist Janice Irvine makes a similar argument in her survey of sex researchers.

Do IRBs need to be so worried? Mustanski and his colleagues asked the adolescents that they studied how comfortable the kids felt answering their sex survey. Around 70 percent felt either “comfortable” or “very comfortable” answering the sex questions–the implication being that it was silly for IRBs to think the questions posed more than a minimal risk. But his data also showed that 3 percent of the respondents felt “very uncomfortable.” He did not point out this finding, and so I asked Dr. Richard Campbell, another presenter, to weigh in on whether he would consider 3 percent to constitute a “large” or “likely” risk. Earlier Dr. Campbell had given a conceptual talk arguing that IRBs conflate the magnitude of risk with the likelihood of risk to participants. In answer to my question, Campbell said that making 2-4 percent of adolescents “very uncomfortable” would not constitute a large or likely risk, and so the research should go forward.

I imagine that IRB members of a more conservative bent would disagree–and this is the crux of the problem. In considering how to revise the human-subjects regulations, would it be more helpful to make the regulations more specific, for example by setting quantitative thresholds and standards that everyone would have to follow? Or would it be best to make the regulations more flexible? The regulations already give IRBs more discretion than they use, and they leave that flexibility untapped because they are perennially concerned about institutional liability. For IRBs, a conversation about protecting human subjects from harm is simultaneously a conversation about protecting the institution from legal harm. IRBs would read surveys like Mustanski’s by fixing on the few people who were uncomfortable rather than the majority of people who were entirely comfortable. Why? Because it only takes one lawsuit.

Is this regulatory contradiction too big for NAS? The debate in Washington continues.


  1. Interesting, as always, Laura. One quick question and a comment. The question concerns your statement that “IRB members of a more conservative political bent” are likely to think that research that is expected to harm only 3% of participants should not go forward. I’m not sure that political ideology, per se, is much at issue in the IRB wars (although perhaps you meant to draw a connection between sex surveys and social and religious conservatism that doesn’t generalize beyond this and very similar examples), but if forced to predict which such ideology would be less likely to block researchers from inviting participants into a study, it would be anti-paternalist libertarians and anti-government/anti-regulation conservatives, not liberals or progressives. Can you hum a few more bars of what role you see political ideology playing in IRB decisions?

    The comment concerns your general thesis that “IRBs don’t use the flexibility in the regulations because they are always concerned about institutional liability” and that “IRBs would read surveys like Mustanski’s by seeing the few people who are uncomfortable rather than the majority of people who were entirely comfortable. Why? Because it only takes one lawsuit.”

    I don’t doubt that IRBs often worry about institutional liability — and, more broadly, bad publicity — and that some of their risk aversion is driven by those and similar concerns. But I think it’s a leap to assume that all or even most IRB risk aversion is driven by bureaucrats fiercely protecting their institution instead of well-intentioned paternalists fiercely (if, as I believe, often mistakenly) protecting participants.

    IRBs tend not to think that participating in research has benefits for participants, and in any case they officially don’t count any benefits other than direct medical benefits (obviously inapposite in sex surveys). On the other hand, they are trained to conduct searching inquiries into all possible risks that a protocol could conceivably pose to (some) participants, and to view virtually anyone, given the right circumstances, as “vulnerable” and in need of additional protections. They tend to think of their job as protecting subjects *from* research, rather than ensuring access *to* it. They have all kinds of psychological incentives, as do all regulators, to confirm their value by “spotting the issues” (as we say in law school) in the protocol. Under those circumstances, it wouldn’t surprise me in the least if an IRB, presented with data showing that 3% of (similar) participants (in similar studies) will be harmed, would still have qualms about the protocol, or even disapprove it. After all, their only job is to protect participants — *every* participant, not just the ones in the majority.

    As you know, I think there are lots of problems with this way of governing research, but I do think that this kind of “soft group paternalism,” as Frank Miller and Alan Wertheimer broadly call it, is very frequently at issue in IRB work, and explains lots of IRB risk aversion, including that which continues even in the face of data showing that only a minority of participants have “bad” outcomes. Figuring out the root(s) of the problem of poor research governance matters because the problems will drive appropriate solutions. In addition to being skeptical that collecting data about participant outcomes and showing it to IRBs will do much good, I’m skeptical that the political will exists to support changes to the regs that provide “quantitative thresholds and standards.” What would these rules look like? If only 3% of kids are expected to be harmed, then full speed ahead, because harm to 3% of the sample is acceptable collateral damage? Maybe that’s the right answer, but that’s not how research ethicists and IRB members are trained to think, it’s not the ethical and legal framework that we have, and I think it would take a massive cultural upheaval to change all of this. I elaborate on some of these points here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2218549
