As most readers are probably aware, the past few years have seen considerable media and clinical interest in chronic traumatic encephalopathy (CTE), a progressive neurodegenerative condition linked to, and thought to result from, concussions, blasts, and other forms of brain injury (including, importantly, repeated but milder sub-concussive injuries). CTE can lead to a variety of mood and cognitive disorders, including depression, suicidality, memory loss, dementia, confusion, and aggression. Once thought to afflict only boxers, CTE has more recently been acknowledged to affect a potentially much larger population, including professional and amateur contact-sport athletes and military personnel.
CTE is diagnosed by the deterioration of brain tissue and tell-tale patterns of accumulation of the protein tau inside the brain. Currently, CTE can be diagnosed only posthumously, by staining brain tissue to reveal its concentrations and distributions of tau. According to Wikipedia, as of December 2012, some thirty-three former NFL players had been found, posthumously, to have suffered from CTE. Non-professional football players are also at risk: in 2010, 17-year-old high school football player Nathan Styles became the youngest person to be posthumously diagnosed with CTE, followed closely by 21-year-old University of Pennsylvania junior lineman Owen Thomas.

More than one of these players died by his own hand, including Thomas, Atlanta Falcons safety Ray Easterling, Chicago Bears defensive back Dave Duerson, and, most recently, retired NFL linebacker Junior Seau. In February 2011, Duerson shot himself in the chest, shortly after texting loved ones that he wanted his brain donated to CTE research. In May 2012, Seau, too, shot himself in the chest, but left no note. His family decided to donate his brain to CTE research in order “to help other individuals down the road.” Earlier this month, the pathology report revealed that Seau had indeed suffered from CTE. Hundreds of other athletes, both retired and active, have prospectively directed that their brains be donated to CTE research upon their deaths.

Some 4,000 former NFL players have reportedly joined numerous lawsuits against the NFL for failure to protect players from concussions. Seau’s family, following similar action by Duerson’s estate, recently filed a wrongful death suit against both the NFL and the maker of Seau’s helmet.
The fact that CTE cannot currently be diagnosed until after death makes predicting and managing symptoms, and hence studying treatments for and prevention of CTE, extremely difficult. Earlier this month, retired NFL quarterback Bernie Kosar, who sustained numerous concussions during his twelve-year professional career — and was friends with both Duerson and Seau — revealed that he, too, has suffered from various debilitating symptoms consistent with CTE (but also, importantly, with any number of other conditions), and that he believes many of these symptoms have been alleviated by an experimental (and proprietary) treatment provided by a Florida physician involving IV therapies and supplements designed to improve blood flow to the brain. If we could diagnose CTE in living individuals, then they could use that information to make decisions about how to live their lives going forward (e.g., early retirement from contact sports to prevent further damage), and researchers could learn more about who is most at risk for CTE and whether there are treatments, such as the one Kosar attests to, that might (or might not) prevent or ameliorate it.
Last week, UCLA researchers reported that they may have discovered just such a method of in vivo diagnosis of CTE. In their very small study, five research participants — all retired NFL players — were recruited “through organizational contacts” “because of a history of cognitive or mood symptoms” consistent with mild cognitive impairment (MCI). Participants were injected with a novel positron emission tomography (PET) imaging agent that, the investigators believe, uniquely binds to tau. All five participants showed “significantly higher” concentrations of the agent, compared to controls, in several brain regions. If the agent really does bind to tau, and if the distributions of tau observed in these participants’ PET scans really are consistent with the distributions seen in the brains of those posthumously diagnosed with CTE, then these participants may also have CTE.
That is, of course, a lot of “ifs.” The well-known pseudonymous neuroscience blogger Neurocritic recently asked me about the ethics of this study. He then followed up with his own posts laying out his concerns about both the ethics and the science of the study. Neurocritic has two primary ethical concerns. First, what are the ethics of telling a research participant that they may be showing signs of CTE based on preliminary findings that have not been replicated by other researchers, much less endorsed by any regulatory or professional body? Second, what are the ethics of publishing research results that very likely make participants identifiable? I’ll take these questions in order.
Uncertain Diagnoses & Risk-Benefit Heterogeneity
On his blog, Neurocritic asks:
“What are the ethics of telling [Wayne Clark, the only one of the 5 participants who has experienced no symptoms except age-consistent memory impairment,] that he has ‘signs of CTE’ after undergoing a scan that has not been validated to accurately diagnose CTE? It seems unethical to me. I imagine it would be quite surprising to be told you have this terrible disease that has devastated so many other former players, especially if your mood and cognitive function are essentially normal. . . . I could be wrong about all of this and maybe [their novel PET imaging agent] does provide a definitive diagnosis of CTE (the definition of which may need amending). But don’t you want to be sure before breaking the news to one of your patients?”
One of the most contentious current debates in the law and ethics of genetics and neuroimaging research is whether to offer to return individual research results (IRRs) to participants. Often, IRRs are of uncertain analytical and/or clinical validity, and they may not be clinically actionable. Some worry that returning such IRRs will simply burden individuals with scary, but uncertain and relatively useless, data. Others, by sharp contrast, view an offer to return “their data” to research participants as akin to a human right. I’ve tried to stake out a middle, participant-centered ground in this polarized debate.
On one hand, participants need to understand what they’re getting into when they join a study like this. Information, once learned, cannot be unlearned (thus, the relatively new concept of the “right not to know”). Among other things, Wayne Clark and the other participants should have been told (by which I mean, throughout, meaningfully made to understand) why they were recruited — namely, that their history of head trauma, combined with their MCI symptoms, made researchers suspect that they may well have CTE. In 64-year-old Clark’s case, it should have been made additionally clear that, although his only current symptom is age-appropriate memory loss, investigators might come to suspect that this is a symptom of a neurodegenerative condition rather than of normal aging. And all participants should have been told that they would effectively have no choice but to have their IRRs “returned” to them: a CTE study involving five retired NFL players, released shortly before the Super Bowl and amidst lots of media coverage about the future of contact sports, was bound to go (and has gone) viral. Finally, they should have been told that virtually nothing can be concluded from a study of just five individuals with various additional design limitations. We can’t know, of course, whether the informed consent process in this case was adequate. Readers of the study are told that “[i]nformed consent was obtained in accordance with UCLA Human Subjects Protection Committee procedures” — and also that UCLA owns the patent to the method used in the study, and that some of the investigators are inventors who stand to collect royalties. We should have additional concerns about informed consent, given that the participants by definition all suffer from some level of MCI.
That said, it is not inherently unethical to give people uncertain information — even when the information is potentially devastating and even if it’s not “clinically actionable.” Extremely inconvenient though it often is, life is filled with uncertainties. Information rarely carries with it tags that read 0% or 100%. This is about as true in medical practice, by the way, as it is in biomedical research — in part because huge swaths of “standard practice” are not evidence-based, for a variety of reasons; in part because even a solid evidence base is typically based on the effects of an intervention on narrowly selected research participants in highly controlled circumstances which may not generalize to individual patients in real life; and in part because medicine, even at its best, often remains probabilistic. So although most of us, most of the time, would prefer certainty to uncertainty, where certainty is out of reach, the question becomes whether it’s better, relative to the status quo ante, to obtain (additional) probabilistic information or not.
The answer is that it depends. Learning probabilistic information (here I assume that the study isn’t completely without probative value) about oneself can be risky. But it can also carry potential benefits. Just how risky and/or potentially beneficial it is — and whether this expected risk-benefit profile is “reasonable” (as IRBs must find) — depends on a variety of factors, most obvious among them the kind of information at issue, the degree of uncertainty, and — as I have been at pains to emphasize in my work — the individual’s preferences and circumstances. Sometimes people who suffer from MCI are relieved to learn that they may have a diagnosis, and perhaps a culprit, and that their symptoms aren’t mere figments of their imagination. Other participants, especially those who have lost friends to CTE, may feel so strongly that something needs to be done to advance our knowledge of CTE that they are willing to assume the risks of psychosocial discomfort and privacy invasions in order to contribute to that effort even in a small way.
Heterogeneity in stakeholder preferences implies a prima facie case against any one-size-fits-all law, policy, or ethical code governing risk-benefit trade-offs. (My forthcoming law review article on this “heterogeneity problem” in risk-benefit decision-making by central planners is here; a tl;dr version of some of the take-home points is here.) Sometimes, of course, one-size-fits-all is the best we can do in law and policy; but where we can improve upon it, especially with little or no cost, we should. The presence of heterogeneity tends to recommend private ordering, nudges, federalism, and ex post regulation (rather than ex ante licensing). You’ll find libertarians who are sympathetic to this line of argument, of course. But you’ll also find welfare liberals like Cass Sunstein agreeing (in his Storrs Lecture, no less) that “While some people invoke autonomy as an objection to paternalism, the strongest objections are welfarist in character. Official action may fail to respect heterogeneity . . . .” And so one answer to Neurocritic’s query about “the ethics” of revealing this information is that there is no singular “ethics” of this situation, at least not in terms of substantive outcomes, as opposed to an appropriate process for allowing individualized decision-making.
(Re)Identifiability of Research Data & Risk-Benefit Heterogeneity
Neurocritic’s second concern is about the privacy implications of participating in the CTE study. Of the five participants, two have spoken on the record to the media about the study — voluntarily, I’ll assume. One hopes that they were told that, even if they are okay with the public learning about their results, they can’t always control the way the public interprets those results. For instance, Wayne Clark’s Wikipedia page has already been updated to indicate, inaccurately, that “[a]fter his career, Clark was discovered to have chronic traumatic encephalopathy,” citing an article whose headline declares breathlessly: “Scans show CTE in living ex-players; could be breakthrough.” (See also “Researchers find CTE in living former NFL players,” “Scientists discover ‘holy grail’ of concussion-linked CTE research,” and “Holy Grail Breakthrough in CTE Brain Damage Research.”) Scientists have a responsibility to communicate all science carefully and accurately, but especially sensitive or controversial science. They should go out of their way to avoid hype, and should affirmatively correct the record when necessary. When neuroscience is at issue, investigators should avoid brain porn — pretty pictures of brain scans designed to look as dramatically different from the “control” scan as possible, which exploit our tendency to believe that being able to point to something in the brain makes it more “real” than otherwise. In this case, in addition to plenty of pretty pictures of brain scans, the journal article contains plots of nice-looking correlations between concussions and tau, but these graphics are easily misinterpreted, since results from just five observations will be very sensitive to the influence of outliers.
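To make the outlier point concrete, here is a small illustrative sketch. All of the numbers below are invented for illustration, not taken from the UCLA paper; the point is only that, with five data points, a single extreme observation can dominate a Pearson correlation.

```python
# Illustrative only: all numbers are invented, not taken from any study.
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Four hypothetical players with similar concussion counts and tau signals...
concussions = [3, 4, 5, 4]
tau_signal = [1.1, 0.9, 1.0, 1.2]

# ...plus one hypothetical outlier: many concussions and a high tau signal.
r_without = pearson_r(concussions, tau_signal)
r_with_outlier = pearson_r(concussions + [20], tau_signal + [3.0])

print(f"r without outlier: {r_without:+.2f}")
print(f"r with outlier:    {r_with_outlier:+.2f}")
```

With these invented numbers, adding the single fifth point flips the correlation from about −0.32 to about +0.98, which is why a scatterplot of five players is more illustration than evidence.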
What of the other three participants, who have not been identified? They may nevertheless be identifiable, given the information about them that has been published in the journal article and in the press (e.g., age, position played in the NFL, concussion history, MCI symptoms). One can’t help but be reminded of another recent study, published in Science just a week or so before the CTE study appeared. That paper reported that computer informatics and genetics researchers were able to re-identify five men who had participated in both the 1000 Genomes Project — an international public-private consortium to sequence the genomes of (as it turned out) some 2,500 “unidentified” people from about 25 world populations and place that sequence data, without phenotypic information, in an open online database — and a similar study of Mormon families in Utah, which did include some phenotypic information. Although this “DNA hacking” made a huge splash, the fact that de-identified genetic information can fairly easily be re-identified is not news; it has happened before to research samples (although, importantly, always at the hands of researchers simply attempting to show that it can be done, rather than actors with nefarious motives). NIH, which funds both public genetic databases, responded, as it had following a similar incident in 2008, by reducing the richness of the Utah dataset (eliminating the ages of participants) to make re-identification more difficult. In this case, that was likely appropriate, since participants probably had consented to a different risk-benefit profile. But what to do going forward? Should participants be allowed to donate their data to open access science, knowing that ensuring anonymity is impossible? We can, of course, make research data available to only a limited circle of those with approved access, as is typically done. And we can render our datasets less and less rich, to reduce the risk of re-identification.
But both privatizing and watering down data sets impede knowledge production.
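The re-identification risk turns on quasi-identifiers: attributes that are individually innocuous but jointly rare. A toy sketch of the kind of uniqueness check privacy researchers run, using entirely invented records:

```python
from collections import Counter

# Entirely invented records: no names, but each row carries quasi-identifiers.
records = [
    {"age": 64, "position": "QB", "concussions": 2},
    {"age": 59, "position": "LB", "concussions": 4},
    {"age": 59, "position": "LB", "concussions": 4},
    {"age": 45, "position": "C",  "concussions": 10},
    {"age": 73, "position": "G",  "concussions": 3},
]

def quasi_id(r):
    """The attribute combination an adversary could match to public sources."""
    return (r["age"], r["position"], r["concussions"])

counts = Counter(quasi_id(r) for r in records)

# A record is unique (k = 1 in k-anonymity terms) if no other record shares
# its quasi-identifier combination; those are the re-identifiable ones.
unique = [r for r in records if counts[quasi_id(r)] == 1]
print(f"{len(unique)} of {len(records)} records are unique on quasi-identifiers")
```

In this toy dataset, three of the five "anonymous" records are unique on just three published attributes, which is exactly the worry for five retired players whose ages, positions, and concussion histories appear in a journal article.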
A different — and neglected — approach is the one taken by the Personal Genome Project (PGP), led by Harvard Medical School geneticist George Church. The PGP posts participants’ whole genome sequences (WGS) on the Internet, along with as rich a phenotype dataset as participants are willing to provide. The first ten participants (the PGP ultimately wants to recruit 100,000) identified themselves by name, occupation, and photo, and provided medical and other personal data. Since then, participants generally have not explicitly identified themselves by name, but they have agreed to make their DNA sequences and often huge amounts of personal information available to researchers and to the general public — all with the express understanding and agreement that their anonymity cannot be guaranteed. (Disclosure: I’m a PGP participant; indeed, my genome is being sequenced as I write.) Rather than offering what have, for some time now, clearly been fairly empty promises of de-identification, the PGP’s “open consent” model requires participants to be “information altruists.”
It is, perhaps, only the idiosyncratic person, such as myself, whose net preferences yield a willingness to give such “open consent.” But these people do exist, they may be more numerous than many believe, and they have perfectly rational (if difficult to quantify) reasons to want to sacrifice their informational privacy, including altruism, intellectual curiosity, novelty, and a desire to be part of something bigger than themselves. To help ensure that these really are participants’ considered preferences, the PGP requires that prospective participants obtain a 100% score on a test of genetic literacy that includes questions about the limits of information privacy. Rather than Harvard’s IRB or a state or federal regulator imposing a one-size-fits-all privacy rule, this approach accommodates both heterogeneous risk-benefit preferences and heterogeneity among individuals in their comprehension of the study’s risks.
Were the five retired NFL players who participated in the CTE study knowing information altruists who gave open consent? I don’t know, because I don’t know what they were told and, of that, what they understood and appreciated. But I think they should have been allowed to be.
[Disclaimer: I am not involved in this, and the views expressed here are entirely my own.]
Cross-posted at Faculty Lounge.
 All neurodegenerative diseases can be diagnosed definitively only on autopsy. This is true, for instance, of Alzheimer’s. Yet you likely know at least one person who was diagnosed with Alzheimer’s while still living. That’s because, after much research, a professional consensus has been reached about the clinical diagnostic features of, and objective biomarkers for, Alzheimer’s, which allow clinicians to make a differential diagnosis of “probable Alzheimer’s” as opposed to some other form of dementia. Any in vivo diagnostic for CTE would likely have implications for the (probably much bigger) Alzheimer’s diagnosis market.
 For a graphic description of this process, which suggests one reason why families often wrestle with the decision to permit their loved ones’ bodies to be donated to science, especially when the deceased hasn’t indicated his or her wishes, see a few paragraphs down in this article about the brain donation of hockey player Derek Boogaard, who was found to have had CTE.
 The investigators were led through “organizational contacts” to 19 retirees known to have “MCI-like symptoms.” Of these, 11 were lost to “non-response or disinterest” [sic], 2 to being too young, and 2 to “medical illness.” This was not, then, a representative sample of professional football players, of football players who have experienced concussions, or even of football players who have experienced concussions and MCI-like symptoms. Moreover, although the investigators chose controls that were as similar as possible in relevant ways (e.g., age, BMI) to the players, of the 35 eligible controls they chose only 5 and averaged their PET scans, rather than averaging data from all 35 — a potentially questionable decision to jettison statistical power.
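 The statistical cost of that choice is easy to quantify under simple assumptions: for i.i.d. measurements, the standard error of a control-group mean scales as sigma divided by the square root of n, so a 5-control baseline is noisier than a 35-control baseline by a factor of the square root of 7. A minimal sketch (the per-scan standard deviation here is an arbitrary assumed value):

```python
import math

def se_of_mean(sigma, n):
    """Standard error of the mean of n i.i.d. measurements."""
    return sigma / math.sqrt(n)

sigma = 1.0  # assumed per-scan standard deviation (arbitrary units)

# How much noisier is a 5-control baseline than a 35-control baseline?
inflation = se_of_mean(sigma, 5) / se_of_mean(sigma, 35)
print(f"SE inflation: {inflation:.2f}x")  # sqrt(35/5) = sqrt(7), about 2.65x
```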
 Neurocritic notes that tau deposits observed in the participants’ PET scans may not, in fact, match observed patterns of tau in deceased individuals diagnosed with CTE.
 As profiled in this recent New York Times piece, Neurocritic is one of a “gaggle of energetic and amusing, mostly anonymous, neuroscience bloggers — including Neurocritic, Neuroskeptic, Neurobonkers and Mind Hacks — [who] now regularly point out the lapses and folly contained in mainstream neuroscientific discourse.” If I recall correctly, I first got on Neurocritic’s radar back when Charlie Sheen was “winning.” I took his side in a Twitter war over the professional ethics of diagnosing celebrities. At the time, various people (Dr. Drew, I’m looking at you) were rushing before the television cameras to make all manner of “diagnoses” of Sheen’s mental health. No one lacking (a) medical qualifications, (b) familiarity with the individual as a treating clinician or otherwise, and (c) the individual’s permission to discuss his diagnosis publicly has any business doing so. This is not a hard question. Neurocritic’s interlocutor argued that since there’s no shame in having mental health issues, there’s nothing wrong with “outing” someone. There should indeed be no shame in having mental health issues, which should be seen as on par with physical disabilities. But that is not remotely the world in which we live. Elyn Saks’s story is inspiring, and her willingness to share it — after tenure, in the way she chooses — is wonderful. But that’s her decision to make, not someone else’s. So I agreed then, and still agree now, with Neurocritic about the importance of sound diagnoses, of patient privacy, and generally of avoiding imposing upon individuals even accurate diagnoses when they are unwanted. The rest of this post explains why I think the present situation is — at least potentially — entirely different.
 Small world alert: PGP-10 member James Sherley is none other than “Sherley” from Sherley v. Sebelius.