Part Six of a Seven-Part Blog Series by Guest Blogger Patrick Taylor
Reading the NPRM and its government commentary, one is subtly, slowly led to a sense of inevitability. Arguments from abstract principles emerge, leave a footprint and then, in the wake of another tide of interests and arguments, another principle supplants them. But we are to believe that each previous footprint endures intact. There's "autonomy," said to require expanding opportunities to consent in order to honor individual preferences, overtrodden by scientific convenience, which demands just one-time consent and suggests that world-changing choices to be privacy-bare may be irrevocable. There's privacy, demanding that information meet HIPAA deidentification standards at least some of the time; but there is some undisclosed vector requiring that there be no limit on whom the government may share your medical information with. Surrender to the illusion that these are not inconsistent, and the proposal is the best of all possible worlds, in which every inconsistent good is maximized and every tradeoff ignored. Surrender the illusion itself, and one sees a mix of juxtaposed partial prints going in different directions, each incomplete.
The whole is like a floating castle-in-the-air of creatively linked abstract principles, moored by occasional assertions of fact representing tremendous leaps. The idea seems to be that public trust can be achieved at minimal inconvenience to researchers because either health care providers will be responsible for seeking consent in the first place, or there will be a maximally permissive, nationally uniform consent immune from the inconvenience of review by peers and by the community members best positioned to consider its appropriateness in its local context. The content of that consent will be directly in the hands of highly placed political appointees of the ruling party, in what amounts to an extraordinary leap from institution-specific decentralization to the federal government. No need to stop for breath to consider the sovereignty of municipalities or the states; as North Carolina does to its cities and counties with respect to local values, so the feds will do to every government in the country, including North Carolina. Some legal structures should be avoided where possible in a democracy; this one combines indifference to individuals' values with indifference to health care providers' values (whether secular or religious), to academic values, and to IRB values, and with indifference to the sovereignty of every government closer to the people than federal bureaucracies few people could name if asked, even if prompted by their acronyms (HHS, OHRP). All the scientists need to do, it seems, is exclude the nonsigners from all the tissue banks and databanks that are the foundation of future research.
But the first principles of research ethics (respect for persons, beneficence, and justice), which are used to justify this, are never deterministic. Those principles do not determine which comes first; they are principles that require thought, such as about the meaning of respect, not principles that excuse the lack of it.
Plus, confusing conceptual coherence with causality, the argument treats ethics like a magical spell to fulfill whatever is desired. Only one kind of ethics, utilitarianism, looks at consequences and tries to maximize happiness. The kind practiced here makes no such promises; it is principled, like the Ten Commandments. There is no promise one will be happy following them, a truth pithily summarized in the half-truth "nice guys finish last." That does not mean that ethics lacks causal impact; respect for rights can guide actions that will move mountains. But in themselves, principles bring nothing about. Like Bishop Berkeley's famous tree, if a principle falls in a forest unnoticed, nothing has happened.
What is lacking here is any causal explanation sufficient to demonstrate that the proposed policy will be effective at anything except exactly what it does. Health-related regulatory proposals typically have extensive factual data about the past, a causal theory justified by data concerning why the proposal will improve the situation, and some demonstration that if left alone the situation will be worse. Policies, though often rooted in principles, are no less about results. That is not so here. Often, huge changes will have been tried out locally or suggested by pilot programs that have been carefully monitored for unintended consequences. Not so here. The effects of the consent proposal as a whole have never been tested. Some bits and pieces have. For example, we know that consent requirements generally introduce bias, and in this case we showed that the most visible exclusionary biases are whoppers. Blanket consents have been tested to see whether unprompted people know what they are giving up: they don't.
This is not the first time government has yelled "Eureka! We have discovered the elusive secret to consent for research with clinical data and public trust!" In fact, since 1991 it has offered at least seven different authoritative answers. The first was the extant regulations to protect human subjects in certain federally funded or federally regulated research, permitting research on sets of anonymized medical records without patient consent, and establishing consent standards and institutional review board review for most other research. Anonymization means removing the most obvious identifiers in the copying of a clinical record into a research record. The next three versions were the HIPAA regulations adopted by the outgoing Clinton administration and the substantially revised regulations the Bush administration replaced them with, since modified by the HITECH Act. They apply only to providers, payers, and claims clearinghouses. They relied on a new term of art, "deidentification," which requires removal of a listed set of direct and indirect identifiers or a statistically justified determination that reidentification from analysis with reasonably available databases is extremely unlikely. They also introduced a new term, "authorization," to refer to a patient's permission to use or disclose identifiable information. The remaining official answers proposed modifications of the research regulations: respectively, an Advance Notice of Proposed Rulemaking in 2011, revisions to the Genomic Data Sharing Policy in 2014, and the Notice of Proposed Rulemaking released in September 2015.
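To make the listed-identifier branch of "deidentification" concrete, here is a minimal sketch in Python. Everything in it is illustrative, not the rule itself: the record fields are invented, and only a representative subset of the eighteen safe-harbor identifier categories appears. The actual standard also reaches free-text fields, requires aggregating ages over 89, and imposes further geographic conditions, all omitted here.

```python
from datetime import date

# Illustrative subset only: the actual safe-harbor rule lists eighteen
# categories of identifiers, including identifiers buried in free text.
SAFE_HARBOR_SUBSET = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id", "url", "ip_address",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers."""
    out = {k: v for k, v in record.items() if k not in SAFE_HARBOR_SUBSET}
    # Dates more specific than the year must go; keep only the year.
    if "birth_date" in out:
        out["birth_year"] = out.pop("birth_date").year
    # ZIP codes are truncated to three digits (the rule also requires
    # suppressing prefixes covering fewer than 20,000 people, omitted here).
    if "zip" in out:
        out["zip3"] = out.pop("zip")[:3]
    return out

record = {"name": "Jane Doe", "birth_date": date(1970, 5, 1),
          "zip": "27514", "diagnosis": "E11.9"}
print(deidentify(record))  # {'diagnosis': 'E11.9', 'birth_year': 1970, 'zip3': '275'}
```

The statistical alternative the regulations allow, an expert determination that reidentification risk is very small, involves no such checklist at all, which is part of why the two branches give such different privacy guarantees.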
Seven different answers cannot all be the inevitable, perfect solution.
What is the evidence that the current NPRM proposal will bring about greater privacy protection? Ultimately, the NPRM's approach is not about protecting patients at all. There is not one step that protects privacy more than the status quo; it's about securing patients' consent to what would otherwise be a humongous privacy violation: that the data will be accessible to any researcher from any sector for any research purpose. That means that in the future, government can just ask for it. Government will also be the main source for handing it out. Research grants already require scientists to file their data with the government, and government has already set up databases to hold it and distribute it.
If this were really about patient control, many more possibilities would be under exploration. Whether patients should control all researcher access is a complex question, but there is virtually no evidence that HHS considered anything beyond variations on a theme of open-ended, unlimited consent, which incidentally serves the present eagerness to partner with the huge drug companies in an arrangement obliging the NIH to "freely share" patient records with them.
A close look at the question prompts many further questions. If patients should control access, can they veto any use or only some, and how does one address the possibility that they become free riders on biomedical research, benefitting from, but risking and giving back nothing to, the knowledge build-up that makes their own treatment possible? Should control be individual? Or should there be periodic popular votes, maybe within each hospital region or network? How broad should the consent be? Research consents often allow check-the-box opt-ins or opt-outs governing whether research data can be shared for other research projects, or kept and banked by a for-profit sponsor indefinitely for long-term analysis; typically the variables involve purposes and the researchers' affiliation. Should patients be able to say yes to some researchers or purposes and no to others? Right now, half the country would say no to multinational drug companies and the federal government, if protecting privacy drives the answer. Any purpose? Should society honor racist, sexist, or bizarre exceptions, and inflict on researchers the administrative task of tracking them? What if an exception blocks lifesaving research?

A complete analysis would go even farther, looking at the obligations of data recipients, the revocability of the consent, what steps government has taken to protect privacy once a participant has given consent, and how the participant's altruism should be reciprocated. It would also address what obligations and powers those granted access have. If we do not consent, what then? Can we consent to only some uses or, stated differently, what is our power, through limitations on our consent, to limit future research? Parsing these questions further creates greater precision and more policy choices. For example, to the question "to what access must we have a right to consent?" we might add "and does the answer vary depending on access by whom, where, when, or why?" But there's nothing like that in the NPRM.
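For illustration only, here is one minimal way the granular, purpose- and recipient-specific consent discussed above could be represented. Every name and category in this sketch is hypothetical; nothing like it appears in the NPRM, which is the point.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentDirective:
    """A hypothetical per-participant record of granular consent choices."""
    participant_id: str
    # Per-purpose choices, e.g. {"cancer_research": True, "marketing": False}
    purposes: dict[str, bool] = field(default_factory=dict)
    # Per-recipient-category choices, e.g. {"academic": True, "pharma": False}
    recipient_types: dict[str, bool] = field(default_factory=dict)
    revoked: bool = False  # revocability is itself one of the open questions

def may_access(directive: ConsentDirective, purpose: str, recipient: str) -> bool:
    """Deny unless the participant affirmatively opted in to both the
    purpose and the recipient category (a conservative default)."""
    if directive.revoked:
        return False
    return (directive.purposes.get(purpose, False)
            and directive.recipient_types.get(recipient, False))

directive = ConsentDirective(
    "p-123",
    purposes={"cancer_research": True, "commercial_marketing": False},
    recipient_types={"academic": True, "pharma": False},
)
assert may_access(directive, "cancer_research", "academic")
assert not may_access(directive, "cancer_research", "pharma")
```

Even this toy forces policy choices the NPRM never analyzes: the deny-by-default rule, the vocabulary of purposes and recipient categories, and who bears the administrative burden of tracking the exceptions.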
So what does the NPRM really do? The last post discusses Henrietta Lacks and Big Pharma.
Read Parts One, Two, Three, Four and Five of the ongoing Seven-Part Series.
“Health-related regulatory proposals typically have extensive factual data about the past, a causal theory justified by data concerning why the proposal will improve the situation, and some demonstration that if left alone the situation will be worse.”
This is an extraordinarily ahistorical statement. My sense of the history of health-related policy proposals is exactly the opposite. There is no extensive factual data. Thus, any causal theory is unjustified by any data and is, and always has been, the dream of some policy entrepreneur. Thought experiments alone suggest whether the situation will be made better or worse. And policy becomes the instantiation of the very sort of research that regulation is meant to prevent: that is, uncontrolled experiments on unconsented subjects, carried out by people who have a strong idea of what is good and right based on theory rather than fact. Can you give some reference to suggest that it has ever been otherwise? (I'm thinking here, specifically, of, say, Hill-Burton, Medicare, Medicaid, HMOs, DRGs, ACOs, and the ACA. I could easily think of others, but that's a good list to start with.)