Live Blogging from FDA in the 21st Century Conference, Panel 2: Preserving Public Trust and Demanding Accountability

By Michelle Meyer

[This is off-the-cuff live blogging, so apologies for any errors, typos, etc]

First up is Mark Lange from Eli Lilly (who notes that he is here in his personal capacity only!), speaking about “Data Transparency and the Role of the FDA.”

He prefaces his talk by noting that when he refers to “data,” he means raw, patient-level data from clinical trials. Most calls for the transparency of such data, he says, reflect a common theme: a lack of trust in the pharmaceutical industry. So we might wonder: why doesn’t the pharmaceutical industry simply accede to that request and make its data available?

Mark notes that industry has several concerns. One important one pertains to data exclusivity. In several (if not all) markets, data exclusivity rights are premised on keeping the relevant data confidential, and posting it publicly would be deemed a waiver of those rights. In addition, data exclusivity prevents generic competitors from free riding, and publishing data could allow them to circumvent the very point of data exclusivity.

Moving to privacy concerns, Mark notes that research subjects’ understanding is that their data will be used for particular purposes and shared with regulators, but not publicly posted on the Internet for anyone to use however they wish. Relatedly, there is the potential for interpretation of public data to be biased; research results may be over-interpreted, and analyses may be flawed or even erroneous. Competitors might look for fairly trivial flaws in the data and try to use them to their advantage rather than sincerely trying to advance scientific progress and transparency.

Mark suggests, however, that the choice between privacy and transparency is a false one. A better alternative is available — namely, for objective, expert regulators such as the FDA to receive and vet data in ways that address both audiences and both sets of concerns. The FDA is in fact already experienced in doing this. For example, it determines whether research demonstrates that a drug is safe and effective for a particular use through its marketing application approval mechanism, and it determines the accuracy and adequacy of the portrayal of research results in product labeling and product advertisements. And late last year, it was given responsibility for overseeing the reporting of results from all pre-specified primary and secondary outcome measures for nearly all clinical trials either conducted in the U.S. or intended to be used in support of an application for marketing approval in the U.S. This new responsibility, Mark suggests, could be a powerful tool, depending on how the FDA uses it. For instance, the FDA could exercise its authority to monitor for, and take enforcement action against, both the omission of required results and the inclusion of false or misleading results data.

In concluding, Mark stresses that, when faced with requests for public access to patient-level trial data, we should consider the important role of regulators as trusted intermediaries who can balance competing concerns.

This American Life and Stigma

By Michelle Meyer

Update: This American Life has made a clarification. Please see this post for more.

Let me begin by saying how much I absolutely adore This American Life. I listen to it religiously. I had particularly been looking forward to the most recent podcast episode of TAL: Dr. Gilmer and Mr. Hyde. As the episode’s blurb teases, “Dr. Gilmer and Mr. Hyde” concerns a doctor — Benjamin Gilmer — who takes over the rural North Carolina practice of Vince Gilmer (no relation). Vince is no longer available to see patients because he is serving a prison sentence for killing his father. As Benjamin gets to know Vince’s — and now his — patients, he forms a picture of Vince that’s at odds with his status as a convicted father murderer. How could this doctor who was so devoted to his patients have so brutally murdered his own father?

This episode is right up my alley. True crime? Check. Forensic psychology? Check. The intersection of law and medicine? Yes, please. So when I awoke yesterday morning at 5 am and couldn’t go back to sleep, I eagerly cued up the podcast. The episode recounts, in TAL’s typically riveting fashion, the story of Benjamin’s search for the truth behind Vince’s murder of his father. I enjoyed every minute of the episode until the last five minutes or so, when I became troubled by one critical omission.

Spoilers follow after the jump; listen to the episode first.

R.I.P. Ronald Dworkin (Dec. 11, 1931–Feb. 14, 2013)

By Michelle Meyer

I woke this morning to the very sad news that legal philosopher and NYU law professor Ronald Dworkin died in London early this morning of leukemia, at the age of 81.

I’m not sure whether his illness was well known to those within the legal academy, but it came as news to me, so I confess I’m slightly shocked by news of his death. Others, of course, are much better positioned to give thoughts about his life and career, and no doubt will, here and elsewhere. I’ll share just one brief remembrance. I was the founding co-editor of the Harvard Law Review Forum, and for our very first issue, I solicited a response from Professor Dworkin to Fred Schauer’s (Re)Taking Hart. These were the days when online supplements to law reviews were new, and we didn’t really know how scholars would view these opportunities. When he readily agreed to provide a response, I recall emailing the news around Gannett, to much rejoicing. This was an especially meaningful “get” for me, as in addition to his work in legal philosophy, I had read and appreciated Life’s Dominion as an undergraduate studying bioethics. I was terribly nervous about interacting with him, but he was incredibly kind and gracious and unassuming throughout the process.

Professor Dworkin leaves behind his wife, two children, and two grandchildren. They and his friends and colleagues are in my thoughts.

Update: Brian Leiter is aggregating memorial notices here.

Are You Ready for Some . . . Research? Uncertain Diagnoses, Research Data Privacy, & Preference Heterogeneity

By Michelle Meyer

As most readers are probably aware, the past few years have seen considerable media and clinical interest in chronic traumatic encephalopathy (CTE), a progressive, neurodegenerative condition linked to, and thought to result from, concussions, blasts, and other forms of brain injury (including, importantly, repeated but milder sub-concussive injuries). CTE can lead to a variety of mood and cognitive disorders, including depression, suicidality, memory loss, dementia, confusion, and aggression. Once thought to afflict mostly boxers, CTE has more recently been acknowledged to affect a potentially much larger population, including professional and amateur contact sports players and military personnel.

CTE is diagnosed by the deterioration of brain tissue and tell-tale patterns of accumulation of the protein tau inside the brain. Currently, CTE can be diagnosed only posthumously, by staining the brain tissue to reveal its concentrations and distributions of tau.[1] According to Wikipedia, as of December 2012, some thirty-three former NFL players have been found, posthumously, to have suffered from CTE. Non-professional football players are also at risk; in 2010, 17-year-old high school football player Nathan Styles became the youngest person to be posthumously diagnosed with CTE, followed closely by 21-year-old University of Pennsylvania junior lineman Owen Thomas. Hundreds of athletes, both active and retired, have prospectively directed that their brains be donated to CTE research upon their deaths.[2] More than one of these players died by their own hands, including Thomas, Atlanta Falcons safety Ray Easterling, Chicago Bears defensive back Dave Duerson, and, most recently, retired NFL linebacker Junior Seau. In February 2011, Duerson shot himself in the chest, shortly after he texted loved ones that he wanted his brain donated to CTE research. In May 2012, Seau, too, shot himself in the chest, but left no note. His family decided to donate his brain to CTE research in order “to help other individuals down the road.” Earlier this month, the pathology report revealed that Seau had indeed suffered from CTE. Some 4,000 former NFL players have reportedly joined numerous lawsuits against the NFL for failure to protect players from concussions. Seau’s family, following similar action by Duerson’s estate, recently filed a wrongful death suit against both the NFL and the maker of Seau’s helmet.

The fact that CTE cannot currently be diagnosed until after death makes predicting and managing symptoms and, hence, studying treatments for and preventions of CTE, extremely difficult. Earlier this month, retired NFL quarterback Bernie Kosar, who sustained numerous concussions during his twelve-year professional career — and was friends with both Duerson and Seau — revealed both that he, too, has suffered from various debilitating symptoms consistent with CTE (but also, importantly, with any number of other conditions) and that he believes many of these symptoms have been alleviated by an experimental (and proprietary) treatment, provided by a Florida physician, involving IV therapies and supplements designed to improve blood flow to the brain. If we could diagnose CTE in living individuals, then they could use that information to make decisions about how to live their lives going forward (e.g., early retirement from contact sports to prevent further damage), and researchers could learn more about who is most at risk for CTE and whether there are treatments, such as the one Kosar attests to, that might (or might not) prevent or ameliorate it.

Last week, UCLA researchers reported that they may have discovered just such a method of in vivo diagnosis of CTE. In their very small study, five research participants — all retired NFL players — were recruited “through organizational contacts” “because of a history of cognitive or mood symptoms” consistent with mild cognitive impairment (MCI).[3] Participants were injected with a novel positron emission tomography (PET) imaging agent that, the investigators believe, uniquely binds to tau. All five participants showed “significantly higher” concentrations of the agent, compared to controls, in several brain regions. If the agent really does bind to tau, and if the distributions of tau observed in these participants’ PET scans really are consistent with those seen in the brains of individuals posthumously diagnosed with CTE, then these participants may also have CTE.[4]

That is, of course, a lot of “ifs.” The well-known pseudonymous neuroscience blogger Neurocritic[5] recently asked me about the ethics of this study. He then followed up with his own posts laying out his concerns about both the ethics and the science of the study. Neurocritic has two primary concerns about the ethics. First, what are the ethics of telling research participants that they may be showing signs of CTE based on preliminary findings that have not been replicated by other researchers, much less endorsed by any regulatory or professional bodies? Second, what are the ethics of publishing research results that very likely make participants identifiable? I’ll take these questions in order.

Outsourcing the Up Goering of My Job Talk Paper to Forbes: Personalized Medicine, Personalized Regulation

By Michelle Meyer

So, one thing they say about being on the law teaching market is that you likely will never before have enjoyed — and, less happily, will likely never again enjoy — so much attention to your work and so many opportunities to discuss it. That’s totally true, and it’s totally fabulous. But there’s a flip side that they don’t tell you about: after a while, you get burned out on talking about the same paper over and over again. You’ve likely moved on to other projects and are more excited about them, even if (or because) those projects build on your job talk paper. At this point in the process, your recitation of your job talk paper may have become rote and uninspired. You may, like me, have come to dread the act of rattling off your job talk paper’s thesis and why it matters.

And so it is that, having promised some months ago to blog my job talk paper on what I call the “heterogeneity problem” in research regulation, I have yet really to do so. I’ve blogged around the edges, to be sure (see, e.g., here, here, here, and here), but I can’t bring myself to explain the central thesis one more time. I also owe book editors (holla, Glenn and Holly!) a chapter on the challenges of heterogeneity for the growing global trend in “risk-based regulation” across many industries, and I’ve been procrastinating that, too, I think, largely because it requires me first to provide the reader with a précis of the heterogeneity problem. All of this is annoying, because there are lots of things that build on that central thesis that I’d like to write about, if only I could get over this strange aversion.

Enter physician-scientist David Shaywitz, whose overly kind piece yesterday in the Pharma & Healthcare section of Forbes, Personalized Regulation: More Than Just Personalized Medicine — And Urgently Required, highlights my work and, essentially, Up Goers it for me. It of course doesn’t cover all of the points I make in the paper, and in other ways it extends my thesis beyond what I defend in the paper, but it gives readers the gist. Thank you, David! (Let this also serve as supplemental answers to hiring committee questions about “What does your work have to do with the law?” and “Aren’t you ‘just’ a bioethicist whose work has no relevance for health or administrative law?”)

And now, with that out of the way, in my next post I’ll feel free to apply the heterogeneity problem to this question I was asked on Twitter. I can almost guarantee you that it will be my first and last post about football.

[Cross-posted at The Faculty Lounge]

The Risk of Revictimization and the Ethics of Covering School Shootings: What Journalists Can Learn from IRBs

By Michelle Meyer

Updated below

Like most parents, after learning about the latest mass school shooting this morning, my thoughts immediately went to my own kindergartener. And of course, like most reading this blog, I thought about how poorly we handle guns and mental illness. Before too long, though, I couldn’t help but make a less direct connection between today’s events and my scholarly interests. I’m thinking of the way journalists cover school shootings as compared to how we regulate human subjects research.

As I write in The Heterogeneity Problem, 65 Admin. L. Rev. __ at 14-16 (forth. June 2013):

Studies on sexual abuse and assault, grief, war, terrorism, natural disasters and various other traumatic experiences are critical to better understanding and addressing these phenomena. But exposure to trauma — whether as a survivor or as a first rescuer or other third party — often causes substantial psychological morbidity. . . . Given their potentially fragile state, IRBs understandably worry that “questioning [or otherwise studying] individuals who have experienced distressing events or who have been victimized in any number of ways . . . . might rekindle disturbing memories, producing a form of re-victimization.”

IRBs — local licensing committees that operate according to federal statute and regulation and that must approve most studies involving humans before researchers can even approach anyone about possibly participating — sometimes impose burdensome requirements on the way trauma research is conducted in order to protect adult subjects from the risk of revictimization. And they do so in addition to applying regulations that require researchers to disclose that risk (and others) to subjects.

Contrast this with the way journalists cover trauma.

Bleg: IRBs & Health Disparities Research

By Michelle Meyer

As most readers of this blog well know, health disparities of various kinds are rampant in the U.S. — in obesity, infant mortality and morbidity, cardiovascular health, and many other areas. In most cases, however, we seem to know more about the extent of health disparities than we do about what causes them and what is most likely to ameliorate them.

To rectify this situation, we need to conduct research — and lots of it. Typically, however, health disparities research will have to occur with the same populations who are most likely to be considered vulnerable and in need of extra protections from research. Often, moreover, health disparities research will need to occur in the clinical setting (as opposed to the lab), where patients normally, and rightly, expect that everything done there is designed to serve their individual best interests, rather than to produce generalizable knowledge. Health disparities research might involve research methodologies that are relatively unfamiliar to IRBs, such as community-based participatory research (CBPR), which blurs the traditional distinction between investigator and subject on which the regulations are built. To the extent that disparities are thought to derive from provider discrimination or bias, researchers may face difficulties from a research review system that is designed to protect all “subjects,” including professionals who are incompetent or worse. Eventually, health disparities research scales up to multiple research sites, which usually requires approval from multiple, often conflicting, IRBs. Many interventions to address health disparities, finally, will take the form of public policy rather than clinical treatment. If we want such policies to be evidence-based (and we should), they will have to be tested, perhaps in ways that raise legal or ethical issues (say, randomizing a state’s Medicaid recipients to receive or not receive particular benefits, or randomizing the businesses in a jurisdiction to be required to display nutrition information on the food they sell — or not).

I’m delighted to have received so many comments, both on- and offline, on my last IRB post from those with experience in the research trenches. As I begin a new project along these lines, I would be very interested in hearing again from both researchers and research reviewers with experience in health disparities research, whether you have struggled with these or similar issues (or have abandoned research plans at least partly out of fear of such problems), or have experienced smooth sailing. Feel free to leave comments here, anonymously if you wish, or contact me directly at mmeyer at law dot harvard dot edu. Many thanks in advance.

Exempt Research & Expedited IRB Review: Curb Your Enthusiasm

By Michelle Meyer

A while back, over at PrawfsBlawg, Martin Pritikin had a useful post collecting advice for legal academics looking to break into increasingly popular empirical legal studies (ELS). As Jeremy Blumenthal notes in the comments, Step 1 is to be sure to get IRB approval. This post addresses what I’ll call, with a nod to Cass Sunstein’s work on Chevron deference, IRB Step Zero: Determine whether your research needs IRB approval at all.

Don’t worry, it’s an easy step: As Jeremy’s plenary admonition to all wannabe ELS scholars implies, the answer is almost certainly Yes. Although the regulations in theory establish three risk-based tiers of review — human subjects research (HSR) otherwise subject to IRB review that the regulations nevertheless exempt; HSR that is eligible for expedited review; and HSR that requires review by a fully convened IRB (everything else) — in practice, the first two tiers tend to collapse into the third. In this sense, and now I borrow from Matthew Stephenson and Adrian Vermeule, IRB review has only one step.

A quick note of clarification: As I’ve noted before (here and here), several projects I have in the works, beginning with Regulating the Production of Knowledge: Research Risk-Benefit Analysis and the Heterogeneity Problem, forthcoming next June in the Administrative Law Review, argue that we suboptimally regulate knowledge production. Just to be clear, my argument in that article doesn’t depend on my argument here about the broad scope of the regulations and their failed attempt to achieve risk-based levels of review.* Consider this post a public service for ELS types. That said, I draw here on The Heterogeneity Problem‘s background section, where interested readers will find the relevant citations.


Broadening “Innovation Law & Policy” (and “Human Subjects Research”)

By Michelle Meyer

In legal scholarship and education, innovation law and policy is virtually synonymous with intellectual property in general, and with patent law in particular. This is curious and, I think, misguided. We expend considerable effort designing optimal incentives for innovation. We expend similar effort ensuring that socially useful knowledge, once produced, is widely and accurately disseminated. But if knowledge-producing activities themselves are suboptimally regulated, neither upstream incentives to engage in them nor downstream mechanisms to disseminate their fruits will much matter.

In Regulating the Production of Knowledge: Research Risk-Benefit Analysis and the Heterogeneity Problem, I

critically examine[] that regulatory framework, adopted by more than one dozen federal agencies in the U.S. and many other countries, which governs the vast majority of those knowledge-producing activities that have the greatest potential to affect human welfare: research involving human beings, or “human subjects research” (HSR). [The Article] focuses on the primary actors in the regulation of HSR — licensing committees called Institutional Review Boards (IRBs) which, before each study may proceed, must find that its risks to participants are “reasonable in relation to” its expected benefits for both participants and society. It argues for a particular interpretation of this risk-benefit standard and, drawing on scholarship in psychology, economics, neuroscience and other fields, argues that participant heterogeneity prevents IRBs from carrying out their regulatory duty. Instead, the regulatory system implicitly responds to the heterogeneity problem with risk aversion that is costly not only to researchers and society but, critically, to would-be research participants. The Article concludes by laying out the policy options that remain in the wake of the heterogeneity problem’s intractability: continuing the legal fiction of risk-benefit analysis, honestly embracing the heterogeneity problem and its costs, or jettisoning IRB risk-benefit analysis. A companion Article develops the possibility of the third option.

HSR is not, of course, unknown to the legal academy.