NPRM Symposium: When will regs be revised again? & Marcia Angell in NYRB

There is plenty to say about the proposed changes to the Common Rule that the Office of Human Research Protections announced in September. But it’s striking to consider what is left unsaid—that the regulations will need to be revised again in the near future.

When the final revisions are published (in late 2016, so they say), the rules should include a plan to revise again in ten years. This plan is already in place for a few areas covered in the proposed revisions, but it is imperative that OHRP extend this requirement to the entire new rule.

Read More

Informed consent workshop today and tomorrow; papers available

The first session on “vulnerable populations” just wrapped up at the workshop on “Frontiers of Informed Consent” at Northeastern University in Boston—with great talks on migrant, refugee, and indigenous communities and on the conflicts among law, ethics, and the logistics of informed consent.

Today and tomorrow, there will be talks and discussion on IRBs and community-based research; information, autonomy, and risk; securing informed consent; and confidentiality. In the spirit of a workshop on “applied ethics,” discussion has been energetic and, thankfully, useful. In part this is because the participants range from staff at NIH to members of community review boards.

Stop by or get the pre-circulated papers (email the organizers for the password).

Human Subjects Case Unfolds

Institutional Review Boards are making headlines at outlets such as the New York Times as a research debacle unfolds. I looked through the publicly available documents to figure out what happened and what to expect.

Researchers at 22 universities or hospitals in the US enrolled premature babies in a randomized controlled trial between 2004 and 2009. This was the second part of a broader study, but the first part of the study “raised no concerns” according to the US Office of Human Research Protections on page 2 of its determination letter to the lead institution, University of Alabama-Birmingham. OHRP is the federal agency in charge of enforcing human-subjects regulation.

For the second part of the study, though, OHRP found that all 23 IRBs that approved the study (at 22 research sites) violated federal regulations: the IRBs should have required researchers to tell the parents what the researchers knew—that their babies would be at higher risk of death, neurological damage, or blindness if they enrolled in the study (pages 2 and 10 of the UAB letter). OHRP has posted a determination letter only for UAB at this point, but that letter explains that at all of the sites the agency found violations involving consent documents “similar to those described” in the letter to UAB. The UAB IRB is in especially hot water because it seems to have been the first to approve the 2.5-page template consent form, which the other institutions used (page 5). If you read the last page of UAB’s letter, you can make a good guess at who may officially be getting bad news from OHRP soon.

Read More

Common Rule Wrap-Up at National Academies

Working in private, the National Academy of Sciences’ panel on human-subjects regulations in the social-behavioral sciences met this weekend to draft a final report. On Friday, the panel had wrapped up its public “Workshop on Proposed Revisions to the Common Rule in Relation to Behavioral and Social Sciences.” The workshop aimed to critique OHRP’s proposed revisions to the federal human-subjects regulations (known as the Common Rule), rather than to critique the regulations directly.

Here are a few of the takeaway points that the National Academy panel members said they drew from the public workshop, which I attended:

  • LOW-RISK: It’s essential to change regulations for lower-risk research, but the ANPRM does not currently set out a good way to do this. Few participants seemed keen on the new category of “excused,” nor did they like the current use of “exempt.” The key question, to my mind, is this: How much autonomy do the panelists think should be handed over to scholar-investigators and taken away from IRBs? Speaker Lois Brako advocated requiring everyone to register their studies with their institutions. Other speakers (Brian Mustanski, Rena Lederman) suggested researchers should be given leeway to interpret abstract terms like “risk” and key moments such as when a study begins. Do panelists agree that scholar-investigators are trustworthy and knowledgeable enough to interpret regulations?
  • INTERNATIONAL: The Common Rule gives little attention to research outside the USA, and OHRP’s proposed revisions do not address this dangerous and retrograde gap. Pearl O’Rourke of Partners Healthcare and Thomas Coates of UCLA usefully emphasized this important point and showed the stakes. To my mind, the question for many researchers will be, How should cross-national differences—in institutions’ resources, in study populations—be taken into account in the regulations? Medical anthropologists, for example, are in the midst of a raging debate over this issue. The traditional view has been that we should respect local differences, and this was the original point of requiring IRBs to account for “community attitudes,” which has morphed into a big problem for multisite studies in the present day. The avant-garde in medical anthropology suggests that such “ethical variability” is not just inhumane but also indulges a Western insistence on treating some people as “others” rather than as us—whether in the USA or abroad—which happens to be very convenient for drug developers. In my own research, IRB members also faced the more routine question of whether “community” meant a study population, local residents of a region, or something else altogether. The panel may not have time to consider whether it makes sense to clarify what “community” means and, more broadly, who gets to speak on behalf of a “community” regarding its attitudes.
  • PRIVACY: We have to come up with a system for reviewing social-behavioral research that is either more flexible or more refined. Protections that are appropriate for one kind of study can quickly become inappropriate when applied to another, as a comparison of the presentations made clear. George Alter explained the rigorous and necessary privacy-protection plan for the big data sets and collaborative networks involved in the University of Michigan’s ICPSR. On the flip side, Brian Mustanski and Rena Lederman described the overweening attention paid to the so-called risks in their studies, which involve first-hand interviews and observations.
  • EVIDENCE: We need more data on IRB outcomes. It is apparent that the data exist—as talks such as Lois Brako’s showed; she documented her team’s impressive overhaul of the IRB at the University of Michigan, which was dysfunctional only a few years ago. The data need to be expanded, analyzed, and shared—and supported for the long term. Who will have the money or time for that? That remains to be seen, but either way I will be curious to see the effects of the workshop buzzword: “evidence-based” decision-making. Although panelists saw value in case studies, it would be easiest for them and for policymakers to prioritize problems that can be documented with statistics rather than stories. I wonder, How might this skew the problems that are identified and the people included in the discussions?

Changing human-subjects regulations? Tune in now

Today and tomorrow, the National Academy of Sciences is hosting a workshop on revisions to the human-subjects regulations (the “Common Rule”), especially for rules on social and behavioral research. The workshop is being simulcast, and viewers can send in questions. Join us!

The most provocative presentation this morning, from my perch in the front row, was from Brian Mustanski, who studies adolescent health and risk behaviors–especially same-sex experiences. It’s an important topic to study because of the risk of HIV/AIDS transmission, among other things. But it’s tough for investigators to conduct studies on sex because the topic worries Institutional Review Boards (or researchers believe the topic will worry their IRBs). Sociologist Janice Irvine makes a similar argument in her survey of sex researchers.

Do IRBs need to be so worried? Mustanski and his colleagues asked the adolescents they studied how comfortable the kids felt answering their sex survey. Around 70 percent felt either “comfortable” or “very comfortable” answering the sex questions–the implication being that it was silly for IRBs to think the questions posed more than a minimal risk. But his data also showed that 3 percent of the respondents felt “very uncomfortable.” He did not point out this finding, and so I asked Dr. Richard Campbell, another presenter, to weigh in on whether he would consider 3 percent to constitute a “large” or “likely” risk. Earlier, Dr. Campbell had given a conceptual talk arguing that IRBs conflate the magnitude of risk with the likelihood of risk to participants. In answer to my question, Campbell said that making 2-4 percent of adolescents “very uncomfortable” would not constitute a large or likely risk, and so the research should go forward.

I imagine that IRB members of a more conservative bent would disagree–and this is the crux of the problem. In considering how to revise the human-subjects regulations, would it be more helpful to make the regulations more specific, for example by setting quantitative thresholds and standards that everyone would have to follow? Or would it be best to make the regulations more flexible? The regulations already give IRBs more discretion than they use, in large part because boards are perennially concerned about institutional liability. For IRBs, a conversation about protecting human subjects from harm is simultaneously a conversation about protecting the institution from legal harm. IRBs would read surveys like Mustanski’s by seeing the few people who were uncomfortable rather than the majority who were entirely comfortable. Why? Because it only takes one lawsuit.

Is this regulatory contradiction too big for NAS? The debate in Washington continues.

New Data Reports on Learning “Research Integrity”

When it comes to research with human subjects, about 60 percent of faculty members and 50 percent of graduate students learned about ethics through online or print resources, according to a recent survey. These data could be seen as good or bad news—depending on how you feel about getting your ethics through online training modules, such as CITI. These stats—and many more measures of ethics—are included in a remarkable new data set collected and made publicly available by the Council of Graduate Schools.

The data set is a great resource. Anyone with a browser can build custom tables that include different variables and topics related to “research integrity.” Users can slice the data by field of training (life sciences, social sciences, etc.) and by rank of researcher (faculty members, postdocs, and graduate students).

Here is the punch line on human-subjects training—and a few questions about the data (the CGS has covered questions about methodology):

Read More

FDA Drug Amendments: Still a good fit at fifty?

Fifty years ago on Wednesday, President Kennedy signed into law the US Food and Drug Amendments. The amendments radically overhauled the way in which manufacturers brought drugs to market. Most importantly, the amendments instituted the four-phase review process and the requirement that manufacturers get informed consent from people receiving experimental drugs. If the past fifty years are any indication, though, it’s unlikely that the FDA’s current regulations are well suited to deal with the changing context of medicine, including the clinical trials of stem-cell therapies foreshadowed by the Nobel Committee’s awarding of its prize in Physiology or Medicine earlier this week.

The amendments’ supporters had good intentions and the regulations have had positive effects overall. Yet the US government is still trying to redress many of their negative consequences. The rules have proven to be outmoded for new circumstances that policymakers did not have in mind when they created the amendments five decades ago.

The four-phase review process requires that manufacturers apply to the FDA and submit drugs for agency review at least three times. One consequence of the four-phase review system is that it has extended the time before consumers can access new therapies. This can seem a small price to pay to ensure that drugs are “safe and effective,” a phrase that has become the slogan for the Amendments. People with new, fast-moving diseases, however, have seen the delay as a death sentence. For example, sociologist Steven Epstein has written extensively and carefully about the response to drug delays in the 1980s and 1990s among HIV/AIDS activists. The FDA has responded with changes, such as a fast-track approval system, but these shifts tend to come only in response to dire crises.

Read More