Legal academics who work across disciplines sometimes find themselves in the uncomfortable position of explaining to their stunned colleagues the process by which second- and third-year law students, armed with author c.v.s, decide what gets published and where.
Well, get ready to get your schadenfreude on. For the past 10 months, John Bohannon, a contributing correspondent for Science magazine, has been conducting a sting of (other) science journals and their peer review processes. Much like the famed Sokal hoax, Science submitted to 304 journals a bogus paper written by a fictitious researcher from a nonexistent institution. The paper described “the anticancer properties of a chemical that [the fictitious researcher] had extracted from a lichen,” and according to Bohannon, “[a]ny reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately” and rejected it promptly. And yet, over half of the journals accepted the paper.

Recall that the bogus paper purports to report the discovery of the anticancer properties of a chemical extracted from lichen. Let the prospect of bogus cancer research published in peer reviewed medical journals sink in.

As it happens, all of the journals involved in the sting—both those that offered to publish the bogus paper and those that rejected it—are open access. Bohannon says he wanted to include traditional subscription journals in his sting, but their lengthy review times (occasionally as long as one year) made this infeasible. That didn’t stop him, however, from framing the results of the sting as indicting open access journals as a category. As many in the science community immediately recognized, without a traditional journal control group, we can’t conclude from this sting anything about the open access business model, per se. All we can conclude is that some open access journals—including some hosted by such traditional publishing giants as Elsevier and Sage—fell for the hoax while others—including open access journals that were already fairly well regarded, like PLoS One—did not.
As Bohannon notes (alas, only in the final two sentences of the article), it is entirely possible—indeed, likely—that a comparable sting of a similarly wide range of traditional journals would yield similar results. (Some have also noted the unseemliness of Science—the most traditional of traditional journals—setting out to entrap its open access counterparts.) For some righteous push-back from open access proponents and others along these lines, see here, here, here, and here.
Finally, given my IRB obsession, I can’t help but comment on that angle. Science, like most journals, requires that any study involving human subjects have received their informed consent as well as IRB approval. The editors of the targeted journals are pretty clearly unwitting “human subjects,” as federal regulations define that term. An IRB would have had to waive the usual requirement of informed consent, and to sign off on the privacy, psychological, and financial risks to those editors who agreed to publish the bogus paper. The Science article, after all, names names. Science even published supplementary data containing email correspondence between Bohannon and various editors (only bank account numbers are redacted). And Bohannon reports that at least one publisher has vowed to shutter its offending journal’s doors by the end of the year as a result of the sting. Trust me when I tell you that many IRBs would absolutely worry about these kinds of risks to subjects.
Now, I don’t know that Bohannon in fact failed to get IRB approval. But there is no mention of such approval that I’ve seen. My guess is that both he and Science regard this as investigative journalism rather than as human subjects research, perhaps because, although Bohannon is a Ph.D.-trained scientist and published the results of his study in one of the premier science journals in the world, he makes his living as a science writer. It’s one of the oddities of our IRB system that what some can do, if at all, only after often-protracted prospective third-party review, others can do at will. Academics are usually based in institutions that have contracted with federal regulators to subject to IRB review all human subjects research conducted under the institution’s auspices, while journalists and other writers, for instance, are typically based in no such institutions.
It’s another oddity of the IRB system that all human subjects are created equal: when determining how much risk investigators ethically may ask subjects to face, the federal regulations do not allow IRBs to discriminate between, on one hand, patient-subjects in a cancer trial and, on the other hand, incompetent editors reviewing a bogus paper for a medical journal about lichen’s anticancer properties—or terrorists, or ICU staff who fail to practice good hygiene in caring for their patients.
For all its methodological flaws, the Science sting at a minimum provided more details of what we already knew to be the seedy underbelly of some “peer review” journals. Exposing this underbelly is important. I’m not saying it shouldn’t have been done. To the contrary, my point is that it’s worth pausing to reflect on the fact that similarly critical work often cannot be done by those best trained to do it.
Incidentally, Science’s peer review sting is part of a special issue, Communication in Science: Pressures and Predators. The whole thing is worth a read.
[Cross-posted at The Faculty Lounge]