By CK Gunsalus, Director, National Center for Professional and Research Ethics (NCPRE), University of Illinois Urbana-Champaign
We know what it takes for institutions and scholars to produce high-quality, high-integrity research, and yet we do not always act upon that knowledge. As far back as 1988, Paul J. Friedman described both the roots of systemic shortcomings and approaches for conducting trustworthy research. Despite a clear understanding of the issues and steps that would improve our research and educational environments, the academy continues to be dogged by those same systemic issues. A recent National Academies of Sciences, Engineering, and Medicine consensus study, Fostering Integrity in Research, in which I participated as a panel member, explores that same disconnect and makes recommendations. The bottom line is this: we must shift our attention and energy away from individual bad actors—though they exist and must be addressed—and toward the highly complex ecosystem within which research is conducted.
An update of an earlier appraisal published in 1992, the 2017 NASEM report describes the transformation of research through advances in technology, globalization, increased interdisciplinarity, growing competition, and multiplying policy applications. It identifies six core values underlying research integrity—objectivity, openness, accountability, honesty, fairness, and stewardship—and outlines best practices, including checklists, for all aspects of the research enterprise. I encourage you to read it and use these tools in your own work.
All the reports in the world won’t improve research integrity, however, if we don’t do the work in our institutions, departments, and research groups. There are many components to this effort, some of which are discussed in separate posts by my colleagues John P.A. Ioannidis and Barbara A. Spellman elsewhere in this symposium. Let’s focus here on institutional infrastructure.
First, institutions must grapple with core human issues—the fact that even when following best practices, having the best intentions, and working very hard, research collaboration and replication are difficult endeavors. People are complicated: we have career pressures, are susceptible to multiple cognitive biases, face misaligned incentives, and harbor an all-too-human tendency for self-deception. Add the complex system we work within, and it’s no wonder we struggle. Sure, there are “bad apples,” and we must deal with them; it’s past time, though, to think about the larger environment—the incentives, pressures, and messages we send, and the conduct we reward or overlook.
In writing the NASEM report, we moved away from the past usage of “questionable research practices,” instead describing acts that were previously regarded as being in the gray zone as clearly “detrimental research practices.” We did so because misleading statistical analysis, awarding authorship to researchers who don’t deserve it (and vice versa), not sharing data or code, and not properly supervising research are damaging to the integrity of science. They should be called what they are.
We must also consider how our academic system—the “barrel” in which all of us apples operate—shapes our perceptions and choices. We know people are influenced by the actions of those around them and the environments in which they work. As Dan Ariely notes, “The amount of cheating in which human beings are willing to engage depends on the structure of our daily environment.”
Our research ecosystem misaligns significant incentives—Steven Kerr’s On the Folly of Rewarding A, While Hoping for B describes how “reward systems are fouled up in that the types of behavior rewarded are those which the rewarder is trying to discourage, while the behavior desired is not being rewarded at all.” A 2014 article by Bruce Alberts lays out elements, including hyper-competition, that are damaging our overall goals and mission.
We know that much of what is provided as responsible conduct of research (RCR) training is ineffective: in too many places it is low priority, inadequately funded, compliance-focused, poorly timed, and not assessed. We also know what more effective education looks like. Yet too often, institutions provide uneven mentoring, suppress concerns, or retaliate against those who speak out about faulty systems or ineffective institutional controls.
These problems, dilemmas, bad habits—choose your term—are especially troublesome because our institutions educate the next generations of researchers. Students start their careers in the lab, see the way things are done, and can develop a flawed mental model of research conduct. Because of the structure of the academic enterprise, students are dependent on advisors for funding and advancement; when local practices deviate from formal RCR training, guess which one the students follow?
Leaders are responsible for the integrity of the enterprises they lead. Their words and, more importantly, their actions, shape the environment. And by leaders, I don’t mean only those who hold titles like provost, chancellor, vice president, or dean—I mean all of us who have the power and the platform to influence our institutions and their members. We must hold ourselves accountable for the context we create.
So, how do we work together, effectively, to address these really hard problems? I suggest three major strategies:
Measurement. We must guard against our human tendency toward self-deception—and capitalize on the value of empirical data and competitive instincts—through assessment and benchmarking. At our center, the National Center for Professional and Research Ethics (NCPRE), we provide an online set of administration and analysis tools for the Survey of Organizational Research Climate (SOURCE), developed by Brian Martinson and Carol Thrush, the only evaluation tool we know of that has been statistically validated with a large sample. Based on the established correlation between individual choices and organizational outcomes, it assesses the integrity climate of organizational research environments.
Education. Setting the right tone requires attention to everyday behaviors; one-size-fits-all, multiple-choice compliance training is not education. Meaningful and effective education must incorporate information about conducting research with integrity and strategies to address and solve real-world research challenges. Intentional education in the conduct of responsible research should incorporate professional skills as well—how to present research, support diversity, foster good laboratory practices, maneuver in the trenches for credit, navigate complex human interactions and difficult conversations, choose mentors and colleagues with good character, and get useful advice when encountering problems.
Institutional stewardship. We must combine awareness of the human elements that can affect integrity with an examination of factors that can undermine institutional integrity: conflicts of interest; the short half-life of institutional memory about how to respond to allegations of misconduct; the mismatch of research needs and academic comfort zones; and more. Among the strategies for strong stewardship are peer review of misconduct investigation reports and tools such as the videos, checklists, guides, and tips developed by NCPRE and others.
Human failings and institutional weaknesses can combine to blind us to what’s really important—the integrity of the scientific enterprise, the development of our students, the other scholars who depend on our findings for their research, and the patients who are counting on us. It’s my hope we can use the approaches I’ve described here to make sure we’re always asking the right questions: not “how will this hurt our reputation if it comes out?” or “how do we make this go away?” but rather, “do we want our reputation associated with dishonest work?” and “what kind of education are students getting at our institution?”