When it comes to research with human subjects, about 60 percent of faculty members and 50 percent of graduate students learned about ethics through online or print resources, according to a recent survey. These data could be seen as good or bad news—depending on how you feel about getting your ethics through online training modules, such as CITI. These stats—and many more measures of ethics—are included in a remarkable new data set collected and made publicly available by the Council of Graduate Schools.
The data set is a great resource. Anyone with a browser can build custom tables that include different variables and topics related to “research integrity.” Users can slice data by fields of training (life sciences, social sciences, etc.) and by rank of researcher (faculty members, postdocs and graduate students).
Here is the punch line on human-subjects training, along with a few questions about the data (the CGS has questions about methodology covered):
Hundreds of program directors at seven American universities were asked how their faculty members, graduate students, and postdocs learned about research ethics:
GRADUATE STUDENTS

| General Topics | Advisor | Course | Workshops | Online/Print | None | N/A |
| --- | --- | --- | --- | --- | --- | --- |
| Human Subjects | 58% | 39% | 23% | 50% | 0% | 18% |
FACULTY MEMBERS

| General Topics | Independent Research | Workshops | Online/Print | None | N/A |
| --- | --- | --- | --- | --- | --- |
| Human Subjects | 38% | 34% | 59% | 2% | 22% |
Aggregate data like these are commonplace, important—and worth reading critically:
1) It could be easy to assume that grad students and faculty members are getting useful, positive skills and information about the ethical conduct of research with human subjects. But it is important to remember that the data do not tell us what people learned—for example, what graduate students learned from their faculty mentors about ethics. Based on the number of screeds against Institutional Review Board oversight, it is plausible that faculty members are training a new generation of cynics about the system. The system is no doubt broken. But it is also worth training future researchers to recognize the complexities and the paradoxes of oversight.
2) It could be easy to assume that these American-based data represent universal concerns and problems with research integrity. But it’s important to bear in mind that some researchers—especially those outside of the USA—may have very different experiences. I reported on these data as part of a talk on “the graduate student paradox” in US ethics review at a Canadian-based Invitational Summit on Alternatives to Research-ethics Review in October. In my talk, I presented previously unanalyzed ethnographic data I collected by interviewing IRB members and observing their meetings. In response, some board administrators outside of the USA explained that they did not sense the high pitch of animosity toward their review systems that the academic presenters (mostly American) had suggested. It is worth recognizing that treating the problems with the American system as a universal experience may actually misdirect attention from the most pressing issues endemic to other regulatory settings.
3) It could be easy to assume that the data represent the general modes of training at most institutions. But it’s also worth considering how averaging across respondents makes it hard to learn from the anomalous cases: universities operating very well or very poorly. It is the exceptional cases—both good and bad—that would be the most interesting to consider, and perhaps to learn from.