By now, most of you have heard about the controversial study that sought to evaluate how much oxygen to give premature newborns to preserve both their lives and their sight. Below, Laura Stark lays out some of the key details about the study and OHRP’s response, and concludes that part of the problem may have been a result of the difficulties associated with approving multi-site research.
Maybe so, but let me offer a more fundamental challenge: perhaps IRBs are just ineffective – or not as effective as we hope they would be – at protecting human subjects. In retrospect, it looks like all 23 IRBs that reviewed the study, all of which were applying the same regulatory standards, failed to do what OHRP, many news outlets, and – as awareness grows – much of the public think they ought to have done to protect the babies and families involved in this study. How could they all have gotten it wrong? Are the regulations insufficient? Are the procedures insufficient? Is it all just a matter of interpretation?
These questions lead to another fundamental issue: the lack of empirical evidence on IRB effectiveness. We have data on whether IRBs follow the regulations, data on adverse events, data on OHRP warning letters, data on IRB-imposed research burdens and delays – but these all nibble around the edges of the real questions: what are IRBs supposed to be doing, are they doing it well, and how would we know? The counterfactual – a world without IRB review – is pretty tough to study, but I’m working with a group of colleagues at the Petrie-Flom Center and elsewhere to think through some empirical methods to get at precisely these issues. And we’d love to hear your thoughts!
Finally, as a side note, one point that seems to be getting lost in coverage of this preemie story is that although there appear to have been some major problems with the consent process, the study question itself was a very important one to ask.