Exempt Research & Expedited IRB Review: Curb Your Enthusiasm

By Michelle Meyer

A while back, over at PrawfsBlawg, Martin Pritikin had a useful post collecting advice for legal academics looking to break into increasingly popular empirical legal studies (ELS). As Jeremy Blumenthal notes in the comments, Step 1 is to be sure to get IRB approval. This post addresses what I’ll call, with a nod to Cass Sunstein’s work on Chevron deference, IRB Step Zero: Determine whether your research needs IRB approval at all.

Don’t worry, it’s an easy step: As Jeremy’s plenary admonition to all wannabe ELS scholars implies, the answer is almost certainly Yes. Although the regulations in theory establish three risk-based tiers of review — human subjects research (HSR) otherwise subject to IRB review that the regulations nevertheless exempt; HSR that is eligible for expedited review; and HSR that requires review by a fully convened IRB (everything else) — in practice, the first two tiers tend to collapse into the third. In this sense, and now I borrow from Matthew Stephenson and Adrian Vermeule, IRB review has only one step.

A quick note of clarification: As I’ve noted before (here and here), several projects I have in the works, beginning with Regulating the Production of Knowledge: Research Risk-Benefit Analysis and the Heterogeneity Problem, forthcoming next June in the Administrative Law Review, argue that we suboptimally regulate knowledge production. Just to be clear, my argument in that article doesn’t depend on my argument here about the broad scope of the regulations and their failed attempt to achieve risk-based levels of review.* Consider this post a public service for ELS types. That said, I draw here on The Heterogeneity Problem’s background section, where interested readers will find the relevant citations.

Right. So, IRB Step Zero: Does your research require IRB review?

Whatev. My research isn’t federally funded. (In fact, it isn’t funded at all. Grrr.) IRBs are irrelevant to me.

As Justice Alito memorably mouthed, Not true.

It is true that, aside from research involving drugs, devices, and the like that fall under the FDA’s direct jurisdiction, the relevant federal statute and regulations, by their terms, directly regulate only HSR that is federally conducted or funded. In practice, however, a web of contractual relationships ensures that most HSR is subject to IRB review regardless of the source of funding, including virtually all HSR conducted by academics and their students.

When a faculty member (or any other institutional affiliate) receives federal research funding from any of more than a dozen federal departments and agencies, her institution contracts with the Office for Human Research Protections (OHRP), promising, in exchange for the funding, to ensure that the researcher submits her protocol for approval before a properly convened IRB. There has been longstanding appetite in some quarters for directly federally regulating HSR, regardless of the source of funding. OHRP lacks the statutory authority to do that (and perhaps Congress lacks the requisite power as well). So instead, OHRP, in the contract of adhesion it executes with each institution (called a Federalwide Assurance, or FWA (pdf)), invites institutions to voluntarily promise to extend the regulations to all of their HSR, regardless of whether the federal government funds it. Virtually all academic institutions have at least one federally funded researcher, and between 75% and 90% of these, in turn, have agreed to extend IRB review to all HSR. (In July 2011, the Office of Science & Technology Policy and HHS issued an ANPRM that, among many other things, would make such extension mandatory.)

Of the minority of institutions that don’t contract with OHRP to extend IRB review, many nevertheless have adopted an institutional policy of extending IRB review to all faculty and/or student HSR. Similarly, many journals require that research submitted for publication have been approved by an IRB.

The bottom line? If a researcher isn’t subject to IRB review directly, through a federal grant or contract, she’ll likely be subject to it indirectly, through her institution’s contract with OHRP; through institutional policy incorporated by reference in her employment contract; or in her publishing contract. Thus is the oral historian, sociologist, anthropologist, memoirist, music theorist, education professor and, yes, legal academic swept down the rabbit hole of regulations.

But I’m not experimenting on people, I’m just talking to them.

You’re not engaged in human vivisection? Congratulations! But thanks to broad regulatory definitions of “research” and “human subject,” talking to people, analyzing existing data, and even observing people in public will often constitute HSR (a rough sketch of this definitional logic follows the list):

  • “Research” is any “systematic investigation . . . designed to develop or contribute to generalizable knowledge.” (If you’d like to argue that you do not intend for your work to so contribute, feel free; it might work, and would be only moderately humiliating.)
  • A “human subject” is any “living individual about whom an investigator (whether professional or student) conducting research obtains (1) Data through intervention or interaction with the individual, or (2) Identifiable private information.”
  • “Intervention includes both physical procedures by which data are gathered (for example, venipuncture) and manipulations of the subject or the subject’s environment that are performed for research purposes.”
  • “Interaction includes communication or interpersonal contact between investigator and subject.”
  • “Private information includes information about behavior that occurs in a context in which an individual can reasonably expect that no observation or recording is taking place, and information which has been provided for specific purposes by an individual and which the individual can reasonably expect will not be made public (for example, a medical record). Private information must be individually identifiable (i.e., the identity of the subject is or may readily be ascertained by the investigator or associated with the information) in order for obtaining the information to constitute research involving human subjects.”
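
To see how far these definitions reach, here is a minimal, purely illustrative sketch of their combined logic in Python. The function and flag names are my own shorthand, not regulatory terms of art, and in practice the determination is an act of interpretation rather than a lookup:

```python
# Illustrative shorthand for the Common Rule definitions quoted above;
# the flag names are invented, and the real determination is interpretive.

def is_research(systematic: bool, generalizable: bool) -> bool:
    """'Research': a systematic investigation designed to develop or
    contribute to generalizable knowledge."""
    return systematic and generalizable


def involves_human_subjects(interaction_or_intervention: bool,
                            identifiable_private_info: bool) -> bool:
    """'Human subject': a living individual about whom the investigator
    obtains (1) data through intervention or interaction, or
    (2) identifiable private information."""
    return interaction_or_intervention or identifiable_private_info


def is_hsr(systematic: bool, generalizable: bool,
           interaction_or_intervention: bool,
           identifiable_private_info: bool) -> bool:
    """Human subjects research = 'research' involving a 'human subject'."""
    return (is_research(systematic, generalizable) and
            involves_human_subjects(interaction_or_intervention,
                                    identifiable_private_info))


# A typical ELS project: interviewing judges for a generalizable article.
print(is_hsr(systematic=True, generalizable=True,
             interaction_or_intervention=True,
             identifiable_private_info=False))  # True
```

On this logic, a survey of practitioners, an interview study, or an analysis of identifiable existing records all come out as HSR.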

Fine, my research is HSR. But it’s not risky; surely it’s exempt.

Not so fast. Many law professors — including, perhaps surprisingly, some who specialize in innovation policy or intellectual property, and even some whose empirical work should, under the regulations, be submitted to an IRB — know little or nothing about IRBs and their statutory and regulatory basis. Many others, of course, know quite a lot about IRBs.

And then there are those who know just enough to be dangerous. Such people sometimes suggest that regulatory categories for “exempt” research and research subject to “expedited review” (discussed below) significantly lessen the regulatory burden on relevant research(ers). As the Gershwins said, It ain’t necessarily so.

It’s true that the regulations provide for six categories of exempt research. But, as with most ex ante rules, there are significant limits as to how well-defined these regulatory categories can be. It requires an act of interpretation — of both the regulations and the study at bar — to decide whether a study is exempt. Notice that the regulations are silent about the allocation of this interpretive power. Nearly all institutions, in prudent adherence to agency guidance, require that it not be the researcher who decides that her project is exempt. (And, to be fair, if we trusted researchers’ characterization of their studies, we wouldn’t need IRBs in the first place.) Instead, researchers must submit any study they believe to be exempt from IRB review to . . . (wait for it) . . . the IRB.

At that point, typically, the IRB chair or another IRB member will review the protocol and recruitment plan to determine whether it is in fact exempt. So purportedly exempt studies don’t necessarily undergo full-blown merits review by a fully convened IRB.

But, unless the IRB chair or her designee is really quite certain that the study is exempt, she will either deem it not exempt or send it to the full committee to determine its exempt status where, not infrequently, the exemption inquiry becomes difficult to distinguish from the risk-benefit analysis that takes place during full-blown “merits” review. (Why do IRBs tend to err on the side of more, rather than less, review? Because they’re risk-averse, and because the kinds of costs they are most keen to avoid tend to be those associated with approving research that turns out to be dangerous or embarrassing or that incurs liability for the IRB and the institution, rather than the relatively hidden costs of blocking, substantively altering, or delaying welfare-enhancing research. That’s the subject of its own post.)

Moreover, because the regulations constitute a floor, not a ceiling, even if an IRB determines that a protocol is exempt, it isn’t required to refrain from reviewing it. IRBs may — and regularly do — subject what are more accurately called exemptible proposals to expedited or even full IRB review. One study, for instance, found that 15% of the research proposals that had been reviewed by surveyed IRBs were exemptible, and that fewer than half of responding IRBs regularly exempted such exemptible research as analysis of existing data, interviews, and surveys. Indeed, some IRBs, by policy, simply subject all protocols to full review. As one commentator, himself an IRB member, put it: “There is no great gain in seeking [exempt] status.”

Well, even if the IRB reviews my work on the merits, it’ll expedite that review. No biggie.

Again, not so fast. Research qualifies for “expedited” IRB review if it is “minimal risk” and involves only one or more of 10 activities listed in the Federal Register (a list not updated since 1998). Under expedited review, the IRB chair or her designee can review the research proposal alone, which is often, but not always, faster than full review.

But, as with exemptible research, it is the IRB that determines both whether proposed research falls within an expeditable category and whether it involves “no more than minimal risk,” that is, whether “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.” IRBs are notoriously variable in their assessment of whether identical research procedures constitute minimal risk. (The Heterogeneity Problem discusses much of this empirical research on IRB variation.)

Moreover, as with exemptible research, IRBs “may,” but need not, expedite review of expeditable research. Given IRB risk-aversion, much expeditable research, like much exemptible research, in practice receives full IRB review. One study found, for instance, that of those high-volume IRBs surveyed, only 52% regularly conducted expedited review of studies involving a simple blood draw, and only 60% did so for studies involving non-invasive data collection from adults.
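
Putting the pieces together, below is a rough sketch of the nominal three-tier structure and of how, once the IRB rather than the researcher makes each determination and retains discretion to review more intensively, the first two tiers tend to collapse into the third. The flag names and the single risk-aversion switch are my own crude simplifications of the tendencies described above, not anything found in the regulations:

```python
# Illustrative only: the flags and the 'risk_averse_irb' switch are crude
# stand-ins for the determinations and tendencies described in the post,
# not regulatory categories.

def nominal_tier(fits_exempt_category: bool,
                 minimal_risk: bool,
                 fits_expedited_category: bool) -> str:
    """The tier the regulations nominally assign."""
    if fits_exempt_category:
        return "exempt"
    if minimal_risk and fits_expedited_category:
        return "expedited"
    return "full board"


def review_in_practice(fits_exempt_category: bool,
                       minimal_risk: bool,
                       fits_expedited_category: bool,
                       risk_averse_irb: bool = True) -> str:
    """The IRB, not the researcher, makes each determination, and it 'may'
    (but need not) exempt or expedite; a risk-averse IRB reviews anyway."""
    tier = nominal_tier(fits_exempt_category, minimal_risk,
                        fits_expedited_category)
    if risk_averse_irb and tier in ("exempt", "expedited"):
        return "full board"  # the first two tiers collapse into the third
    return tier


# An anonymous survey that nominally falls within an exempt category:
print(nominal_tier(True, True, True))         # 'exempt'
print(review_in_practice(True, True, True))   # 'full board'
```

Flip risk_averse_irb to False and the sketch reduces to the tiering the regulations nominally contemplate.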

So, to return to IRB Step Zero: Does your study need IRB review? If it involves human beings and you hope that others might learn from its results, the answer is almost certainly Yes. Get thee to an IRB.

* What I call the (participant) heterogeneity problem in research risks and benefits would persist regardless of how much or little research the regulations govern (that is, regardless of how heterogeneous is the research subject to IRB review). In fact, I argue that intractable participant heterogeneity — combined with extremely sticky, if not quite intractable, IRB risk aversion — seriously frustrates attempts at risk-based research regulation. In The Heterogeneity Problem, I emphasize regulatory breadth not as a problem in itself (the article is agnostic about that), but in order to demonstrate the importance of IRBs to knowledge production and, secondarily, to help lay the foundation for skepticism about IRB expertise vis-à-vis this vast range of studies.

[Cross-posted from The Faculty Lounge.]

16 thoughts to “Exempt Research & Expedited IRB Review: Curb Your Enthusiasm”

  1. Decent enough review of some of the issues, but there seems to be some traditional splitting of the IRB/regulatory argument. Might not hurt to update the discussion a little.

    1. So, having asked Michael offline to elaborate a bit more on his comment, let me first try to restate his comment, and then respond to it. As I understand it, although Michael has had his fair share of run-ins with IRBs, he believes that they can be “humanistic and complementary to the research process as opposed to simply regulatory,” or “invested collaborators as opposed to police-state monitors,” and he wishes that discussions of IRB review, the one here included, would reflect that. (Michael, I hope you’ll weigh back in if this isn’t a fair characterization of your comment.)

      I agree that IRBs can be complementary to researchers, and that some of them are, at least some of the time. But if much of the time IRB review feels adversarial, there’s a reason for that: By law, IRBs have a singular mandate (and thus are somewhat of a throwback to the administrative state of the 1970s from whence they arose): to protect the rights and welfare of research subjects. Subjects, and not researchers or society, are IRBs’ clients — at least when IRBs operate as the law intended. This legal reality is neither good nor bad in and of itself; “regulation” isn’t, for me, a dirty word, and so depicting IRBs as regulatory isn’t intended as a slur.

      The interesting question, in my view, is what we’re trying to maximize through research regulation, and whether we’ve designed a system that is likely to achieve this end. Many critics of IRBs argue that IRBs, like agencies, tend to become captured by the parties they’re supposed to regulate: the institution and its affiliate researchers. Such critics argue that IRBs often care more about protecting the institution (and, increasingly, themselves) from liability (or mere embarrassment) than about protecting research subjects. There’s no question that some of this occurs. Some IRB decisions simply can’t be explained in any other way (the example that comes to mind is the IRB that rejected a study out of concern that the researchers were violating the IP rights of other researchers by proposing to use a particular measurement instrument — a feature of the protocol that posed no risk whatever to subjects but might have been a headache for the institution). While I do see the IRB-researcher relationship as often adversarial (on both sides), if it helps, I have a more charitable explanation of this (and of IRB risk aversion) than the standard “agency capture” argument. (I develop this argument in Research Contracts, a companion article to The Heterogeneity Problem.) In my experience, IRB members, especially those who self-select into the relatively thankless task of IRB service, generally care deeply about research subject welfare and have a strong sense that they may be the only thing standing between prospective subjects and harm at the hands of an overzealous investigator. The problem, in my view, is that they don’t — and can’t — know what risk-benefit decision will further rather than set back the interests of prospective subjects. So they have to speculate, and when they do, all manner of biases enter into the decision-making process. Among them is a kind of “double risk aversion” according to which people who are asked to make welfare decisions for third parties are significantly more risk averse than when they make the same decisions for themselves. But that will have to wait for a later post.

  2. Great post Michelle! Curious whether you think law school ELS types are better off being reviewed by a single university-wide IRB or whether you’d advise them to consider pushing their dean to start a law school-specific IRB (or one shared with other “low risk” departments)? Do you think some of the creep is “bleed” from the medical to the behavioral? Also, are there any studies on heterogeneity of expedited review procedures specifically among IRBs?
    Cross-Posted at Faculty Lounge

    1. Hi Glenn. There is, of course, lots that ails the IRB system, and so I say let a thousand flowers bloom as we try various ways of redressing these flaws. That said, I don’t think that non-biomedical IRBs are a panacea for much of what ails the IRB system. And so, given limited human resources, I’d rather see ELS types (and their colleagues) train their legal expertise on questions about the problems we’re trying to solve with the IRB system and how to optimize that regulatory system to solve those problems.

      The regs were clearly designed by people whose primary work was in the area of biomedicine, and the regs reflect that origin in various ways. But short of amending the regulations to scale back that creep, there’s not much that can be done about what the regs call for across disciplines and methodologies.
      No doubt, non-biomedical research is sometimes unduly delayed or restricted because, for instance, biomedical ethicists and researchers fail to understand such a study’s somewhat foreign aims or methodologies and apply the regs accordingly. But I’m skeptical that when IRBs delay or restrict social science or humanities research, they do so primarily because they’re reviewing non-biomedical as opposed to biomedical research. Many institutions (e.g., major research universities) already have an IRB dedicated to non-biomedical research, and my sense (again: we lack good data; see responses to other comments) is that these haven’t been a panacea at those institutions. I’d be happy and interested to hear otherwise from researchers.

      Certainly, plenty of anecdotes exist in which an IRB shuts down social science research, and it turns out to have been a fellow social scientist on the IRB who hammered the final nail into the study’s coffin. Intradisciplinary disputes about “proper” methodologies and sufficiently “important” research questions (not to mention turf wars and the usual petty academic politics) exist, and perhaps continue to grow with greater innovation in research methodologies, and IRB decisions can and do reflect these disputes, biases and limited understandings.

      Conversely, although I’m skeptical of the popular view that biomedical research is riskier than non-biomedical research (such that we should restrict IRB review to biomedical research), to the extent that this is true (or simply believed to be true by IRBs), one can imagine biomedically oriented IRBs viewing the social science research they review as comparatively innocuous. If so, then biomedically-oriented IRBs ought to work in favor of social scientists hoping to get their work through relatively quickly and with few or no required substantive alterations.

      Rather, I think that some of the biggest and most fundamental problems with the IRB system run across pretty much all types of research and all types of reviewers. The regs are necessarily open to interpretation, and IRBs tend to interpret both them and the studies they review in ways that involve more rather than less review. IRBs do so for a variety of reasons, including some I alluded to in my response to Paul Reitemeier. In addition to risk aversion (and here again, there are in turn multiple reasons for IRB risk aversion), IRBs, being comprised of human beings who have a natural desire to feel needed and useful, tend to want to spot at least some issues in a protocol (when was the last time a colleague asked you to provide feedback on a paper and you reported that it was just perfect as-is?). That very human tendency will persist even if it is fellow social scientists who are reviewing social science studies (indeed, the tendency may be stronger, as they are now “uber-experts” reviewing studies in their home discipline or field).

      (On empirical studies of how IRBs deal with expeditable research, see my responses to Chris and Norm.)

  3. You state that “between 75% and 90% of these [academic institutions], in turn, have agreed to extend IRB review to all HSR”. This claim is not supported by empirical evidence as reported by AAHRPP, the only accrediting body for institutions with established in-house IRB operations.
    In a March 2012 report, AAHRPP reported that among accredited institutions (presumably those with the highest standards of IRB practices), “In 2011, only 29 percent of organizations checked both boxes on their FWA. This represents a decrease between 11 and 15 percent from 2009 and 2010, respectively. The percent of organizations that did not check both boxes rose only slightly to 53 percent in 2011.”
    “Both boxes” refers to Subpart A (the Common Rule) and Subparts B, C, and D. The fact that 53% did not check either box indicates they do not extend their protections to all research, only to HHS or other federally funded research as required by the funding agency. That is not to say that *in practice* the IRBs do not apply the protections in the regulations, only that they do not assure the federal government that they will always do so. And without that written assurance, they are free to decline to do so.

    1. Thanks for weighing in, Paul; it’s always nice to have an IRB chair take part in the conversation.

      The 75-90% figures I cite come from federal regulators themselves, usually by way of peer-reviewed journals (and again, you can see these and other citations in my article):

      (1) Carol Weil, Lisa Rooney, Patrick McNeilly, Karena Cooper, Kristina Borror & Paul Andreason, OHRP Compliance Oversight Letters: An Update, 32 IRB 1 (2010). The authors — from OHRP, AHRQ, FDA and Walter Reed, respectively — reported the FWA status of 146 institutions for which OHRP had issued a compliance determination letter between 2002 and 2007. They then compared the results to a similar report of 155 institutions that had received letters between 1998 and 2002. They found that over 90% had agreed to extend the regulations in the 1998-2002 sample, compared to 74% in the 2002-2007 sample.

      (2) American Assoc. of Univ. Profs. (AAUP), Protecting Human Subjects: Institutional Review Boards and Social Science Research p. 5 (2001). The authors cite personal communication from Thomas Puglisi, Director, Division of Human Subject Protections, OPRR, DHHS, for the proposition that “Approximately 75 percent of the largest American research institutions, which for the most part are research universities or hospital affiliates of universities, have voluntarily extended the IRB review system to all human-subject research.”

      I don’t cite the following in the article, but see also:

      (3) In its 2006 report, AAUP stated that “most academic institutions have adopted the same protection for subjects of research that is not federally funded as for subjects of federally funded research, that is, they require advance approval of the research by an IRB.” It based this conclusion on the results of a FOIA request to OHRP for a list of all U.S. colleges and universities with an FWA that had not checked the box. The total was 165. They don’t say what the denominator was, and alas, there’s no easy way through OHRP’s online FWA database to determine the total number of colleges and universities with FWAs on file. But I’d guess that 165 non-extending institutions is a lot closer to 10 or 25% than to 53% of the total number of FWA-holding academic institutions.

      (4) In April of 2010, Zach Schrag made the same FOIA request and reported that 207 colleges and universities had unchecked the box (thus, consistent with other reports, a slight net trend toward unchecking the box — although “[o]nly 60 institutions appear on both the 2006 and 2010 lists. One hundred and two had unchecked boxes in 2006 but not 2010, while 147 unchecked their boxes between 2006 and 2010.”). See https://www.institutionalreviewblog.com/2010/08/more-universities-uncheck-their-boxes.html.

      So, what to make of the very different results reflected in the AAHRPP newsletter you cite (I assume you refer to https://www.aahrpp.org/connect/whats-new/advance-newsletter/advance/2012/03/29/checking-the-boxes-on-the-fwa-current-trends)? Well, that same report states: “Extension of some or all of the HHS regulations to other research is less likely among AAHRPP-accredited organizations.”

      So institutions with AAHRPP-accredited IRBs apparently aren’t representative of FWA-holding academic institutions in general. Perhaps that’s because while AAHRPP permits institutions to uncheck the box without losing accreditation, it requires its institutions to have “equivalent protections” for non-federally funded research. Perhaps the combination of these “equivalent protections” (whatever that means) and the accreditation itself makes institutions less anxious about incurring liability through declining to check the box. (I also see no reason to assume that AAHRPP-accredited IRBs are “presumably those with the highest standards of IRB practices,” but that will have to be a conversation for another day.)

      In any event, there’s a far more basic response to your comment. You say, after rightly acknowledging that even institutions that do not check the box may in practice require IRB review of all research, that “without that written assurance [the FWA], they are free to decline to [extend IRB review to non-federally funded research].” Well, sure. In fact, they’re free to decline to extend IRB review when they sign the FWA. Yet, one way or another, the vast majority don’t. That is, *all* the empirical data suggests that the vast majority of academic institutions extend IRB review, either via the FWA or through their own policy. As your own source concludes, “among the organizations that did not check one or both boxes, 28 percent had written policies and procedures addressing equivalent protections for non-DHHS-sponsored research. . . . The remaining 72 percent of organizations that did not check one or both boxes applied the DHHS regulations to all research regardless of funding source.”

      So you and I, citing different empirical sources, which are in turn based on different samples of institutions, can quibble about how many institutions extend IRB review via the FWA and how many do so through their own policy. In the latter case, true, the federal government lacks jurisdiction. But in both cases, non-federally funded researchers are subject to IRB review, which was the point I was trying to make in the post.

  4. On ‘unchecking the box,” we (Duke Medicine) unchecked it solely because it makes our non-fed sponsored research off limits to OHRP audits. We apply the regs (including HIPAA) to all our medical center based research.

    Duke has a separately constituted IRB for campus-based research. In the medical center, we are pretty good about expediting protocols that are eligible for it. (BTW, you didn’t mention that ‘expedited’ does not mean ‘quick’; many of our expedited reviews take longer to get to approval than full board ones.)

    1. Thanks for sharing Duke Med’s experience. On the time that expedited review takes, I did note in the post that expedited review is “often, but not always, faster than full review.” But I confess that the possibility that expedited review could take longer than full board review didn’t occur to me. Is that because the study first goes through expedited review by a single reviewer, who then decides that it needs full board review after all, at which point the study has to get in line behind all the studies that went directly into the full IRB review queue? If so (and assuming this is sufficiently transparent to researchers), this would seem to invite interesting gambling on the part of researchers who believe that they have an expeditable study but don’t want to risk further delays in case the IRB member(s) in charge of making that determination disagree.

  5. You suggest that there are two very different heterogeneity problems — subject heterogeneity and IRB heterogeneity. When I moved from Harvard to UAZ I was struck by the very different policies in place at each, and am now completing a study of the top 50 universities and seeing even more heterogeneity. Has there been any serious effort to create a set of model rules for IRBs to simply adopt, similar to the ABA or AMA model rules for their professionals?

    1. Well, AAHRPP has a set of standards for the IRBs it accredits. And there are various efforts afoot to streamline IRB review of multisite studies. For instance, since research-related impediments to addressing health disparities are on my mind, the National Institute on Minority Health and Health Disparities’ Research Centers in Minority Institutions program has identified multiple IRB review as a “major impediment to the timely and effective conduct of such research,” and is developing a “community-partnered approach to streamlining IRB review across its consortium of 18 RCMI grantee institutions that will ensure compliance while enhancing the quality of health disparities research.” But I suspect that such efforts will be more feasible in specific kinds of research (e.g., health disparities). The Common Rule (coupled with OHRP and FDA guidance) is, of course, the primary set of model rules for IRBs across the board, and I’m skeptical that we can get much more specific than it without sacrificing IRBs’ ability to respond flexibly to different studies and different local contexts, given the heterogeneity of research (to add a third heterogeneity along with participants and IRBs). (That’s not to say, of course, that we can’t improve on the regulations.) I’ll of course be very interested to see the results of your research; we certainly need more data about what IRBs are actually doing.

  6. Thanks for this post, and for the elaborations in the comments. My sense is that the proportion of unchecked boxes is much higher among universities that do a lot of research and among universities with law schools than it is among all academic institutions, but I agree that there’s not too much point quibbling when most researchers face the same rules.

    I am curious about your claim that “typically, the IRB chair or another IRB member will review the protocol and recruitment plan to determine whether it is in fact exempt.” My impression is that because neither regulations nor OHRP guidance require exemption determination to be done by an IRB member, many larger universities delegate this work to human protections staff. But I don’t recall any studies on this.

    1. Hi Zach. Very interesting observation about a possible correlation between institutions that have unchecked the box and those with law schools (though I wonder if the correlation would hold if we controlled for whether the institution is also a major research producer). Any speculation about causation?

      I have a similar sense that at major research institutions with large IRB staffs of Certified IRB Professionals (CIPs) (https://www.primr.org/certification.aspx?id=206) and others, it is indeed often such a staffer, rather than a voting IRB member, who makes the exemption determination (as compared to expedited review determinations, which the regs require be made by one or more IRB members). So I suppose it would be more accurate to have said “the IRB chair or her designee,” where designee might include another IRB member or a staffer. Like you, however, I merely have this sense from reading a lot of IRB handbooks and other literature, rather than from any systematic study of the policies in place at such institutions.

  7. Michelle, I was wondering if you have a source for one of your statements:

    “One study, for instance, found that 15% of the research proposals that had been reviewed by surveyed IRBs were exemptible, and that fewer than half of responding IRBs regularly exempted such exemptible research as analysis of existing data, interviews, and surveys. Indeed, some IRBs, by policy, simply subject all protocols to full review.”

    I am doing my dissertation on this very topic, and have not found much empirical evidence on the number of institutions that use or do not use the exemptions and expedited review, outside a report by Bell and Associates (for the Feds) in the 1990s.

    1. Norm, you’re right that there is relatively little data about the decisions that IRBs actually make, and how they come to make them. In this regard, IRBs are a bit like the proverbial black box of the jury.

      The stat you reference indeed comes from the so-called Bell Report, which is widely regarded as the best data we have about many aspects of the IRB system, despite its being out of date. Commissioned by the NIH, as you know, the Bell Report was a nationally representative sample of more than 2,000 human subjects researchers and IRB chairpersons, members, administrators and institution officials associated with 491 minimally active IRBs operating in 1995. (For other interested readers, the cite is: JAMES BELL ET AL., FINAL REPORT: EVALUATION OF NIH IMPLEMENTATION OF SECTION 491 OF THE PUBLIC HEALTH SERVICE ACT, MANDATING A PROGRAM OF PROTECTION FOR RESEARCH SUBJECTS 28–30 (1998).) Many academic institutions provide something like an IRB handbook or set of policies online for use by their research faculty; you can learn something about expedited and exempt procedures this way, though the data is incomplete, not always clearly up to date, and slow going, to say the least. Good luck!

  8. IRBs are comprised of normatively guided human actors, so they vary somewhat across space, time, and incumbencies, not to mention changes in regulatory and local institutional environments. 45 CFR 46 and OHRP guidance seem to me the closest thing we have to widely accessed models of practice.

    “One size fits all protocols” (= the chair reviews everything, and decides which ones go to the convened board) is how my local IRB operated when I arrived as an administrator. Even exempt protocols were told to come back in a year to renew their exemption. After several years of increased case flow, the retirement of said (very dedicated and wise) chair, and some trial, error, and tribulation, ours now works this way: One staff analyst triages and comments on every application, and each then goes either to (a) a second staffer for review, if screened as exempt; (b) one board member, by expertise or in rotation, if screened as expedited; or (c) the chair, to confirm agenda placement, if screened as full board. Protocols sometimes get retracked by the second reviewer, after some consultation. Determination as exempt means never having to say “here’s my renewal application,” as long as the main study parameters don’t change much, per the determination letter.

    Our boxes are checked, but I am reconsidering–mainly, though, as a matter of workload. About half our protocols are exempt, a third expedited.

    I wonder whether biomedical vs. behavioral focus (ours is the latter) makes a difference. In the olden days (1970’s), I experienced biomedical boards that had trouble approving even quite benign behavioral protocols. But with strictly behavioral boards, to mash up Emile Durkheim: in a community of saints, even the teeniest error could go to Full Board for review.
