This post is part of a series on emerging research challenges and solutions. The introduction to the series is available here, and all posts in the series are available here.
By John P.A. Ioannidis, MD, DSc, C.F. Rehnborg Chair in Disease Prevention, Professor of Medicine, of Health Research and Policy, of Biomedical Data Science, and of Statistics, and Co-Director, Meta-Research Innovation Center at Stanford (METRICS), Stanford University
Generating reproducible research results is not an easy task. As discussions about a reproducibility crisis become more common and occasionally heated, investigators may feel intimidated or even threatened, caught in the middle of the reproducibility wars. Some feel that the mounting pressure to deliver (in both quantity and quality) may be threatening the joy of doing science and even the momentum to explore bold ideas. However, this is a gross misunderstanding. The effort to understand the shortcomings of reproducibility in our work and to find ways to improve our research standards is not some sort of externally imposed police auditing. It is a grassroots movement that stems from scientists themselves who want to improve their work, including its validity, relevance, and utility.
As has been clarified before, reproducibility of results is just one of many aspects of reproducibility. It is difficult to deal with in isolation, without also considering reproducibility of methods and reproducibility of inferences. Reproducibility of methods is usually impossible to assess, because unfortunately the triplet of software, script/code, and complete raw data is hardly ever available in a complete, functional form. Lack of reproducibility of inferences leads to debates, even when the evidence seems strong and well-rounded. Reproducibility of results, when considered in the context of these other two reproducibility components, is unevenly pursued across disciplines. Some fields, like genetic epidemiology, have long understood the importance of routinely incorporating replication as a sine qua non in their efforts. Others still consider replication second-class, "me too" research. Nevertheless, it can be shown (see Ioannidis, Behavioral and Brain Sciences, in press) that in most circumstances replication has at least the same value as original discovery, and often more. This leads to the question: how do we reward and incentivize investigators to follow a reproducible research path?