Federal regulations and ethical principles require that Institutional Review Boards (IRBs) weigh the anticipated risks of a proposed human research study against its potential benefits (to subjects or others) before authorizing the study. This requirement exists because, before regulatory oversight, unethical researchers exposed subjects to high degrees of risk without sufficient scientific or ethical justification.
Although the physical risks posed by clinical research are fairly well understood, so-called “informational risks” (risks of privacy breaches or violations of confidentiality) remain a source of great confusion and controversy. How do you quantify the harm that comes from a stolen, but encrypted, laptop full of study data? Or the potential for embarrassment caused by observation of texted conversations held in a virtual chat room?
IRBs have for years considered the potential magnitude and likelihood of research risks in comparison to those of activities and behaviors normally undertaken in everyday life. But everyday life in today’s digital world is very different from everyday life in 1981, when the regulations were implemented. People share sonogram images on Facebook, replete with the kinds of information that would, in a research context, constitute a reportable breach under the Office for Civil Rights’ HIPAA Privacy Rule. They also routinely allow their identities, locations, and other private information to be tracked, stored, and shared in exchange for “free” applications downloaded to smartphones, GPS devices, and tablet computers.
Attitudes about privacy are changing, and there is a generational divide. Younger people, more comfortable with digital devices, still care about privacy, but they do not draw the line between the public and the private based on the medium of exchange. Older people, by contrast, who were not raised with a bottle in one hand and an iPad in the other, view digital transmission and storage of data as inherently risky and therefore more likely to threaten privacy. Put simply, they ascribe greater security to hard paper copies in file cabinets than to anything that can be viewed on a screen.
Because IRBs tend to be composed of university faculty members, many voting members share this generationally specific skepticism about technology, which leads them to overestimate the likelihood of harm whenever the medium of data collection, transmission, or storage is digital. The result can be a disconnect between the way IRBs think about protecting potential subjects and the way those potential subjects think about their own needs for protection (or lack thereof).
In light of changing norms, IRBs need to rethink the ways in which they evaluate so-called “informational risks.” Traditional ideas about protecting privacy in research sit uneasily with the possibility that the very data collected in a study may already have been disclosed in a research subject’s Twitter feed. Why should information collected or created in a research context differ, in its potential to harm the discloser, from information disclosed via other fora? If it does not, then research data should not be regulated differently solely because of historical imperatives rooted in prior unethical acts.
It would be so helpful if we had some empirical data on what informational harms actually look like in the research context, instead of just conjecture about informational risks. Sometimes, I think people are similarly worried about genetic privacy without a clear sense of what exactly they are worried about.
Loving all the human subjects research posts, Sue!