Bleg: IRBs & Health Disparities Research

By Michelle Meyer

As most readers of this blog well know, health disparities of various kinds are rampant in the U.S. — in obesity, infant mortality and morbidity, cardiovascular health, and many other areas. In most cases, however, we seem to know more about the extent of health disparities than about what causes them and what is most likely to ameliorate them.

To rectify this situation, we need to conduct research — and lots of it. Typically, however, health disparities research will have to occur with the same populations who are most likely to be considered vulnerable and in need of extra protections from research. Often, moreover, health disparities research will need to occur in the clinical setting (as opposed to the lab), where patients normally rightly expect that everything done there is designed to serve their individual best interests, rather than to produce generalizable knowledge. Health disparities research might involve research methodologies that are relatively unfamiliar to IRBs, such as community-based participatory research (CBPR), which blurs the traditional distinction between investigator and subject on which the regulations are built. To the extent that disparities are thought to derive from provider discrimination or bias, researchers may face difficulties from a research review system that is designed to protect all “subjects,” including professionals who are incompetent or worse. Eventually, health disparities research scales up to multiple research sites, which usually requires approval from multiple, often conflicting, IRBs. Many interventions to address health disparities, finally, will take the form of public policy rather than clinical treatment. If we want such policies to be evidence-based (and we should), they will have to be tested, perhaps in ways that raise legal or ethical issues (say, randomizing a state’s Medicaid recipients to receive or not receive particular benefits, or randomizing the businesses in a jurisdiction to be required to display nutrition information on the food they sell — or not).

I’m delighted to have received so many comments, both on- and offline, on my last IRB post from those with experience in the research trenches. As I begin a new project along these lines, I would be very interested in hearing again from both researchers and research reviewers with experience in health disparities research, whether you have struggled with these or similar issues (or have abandoned research plans at least partly out of fear of such problems), or have experienced smooth sailing. Feel free to leave comments here, anonymously if you wish, or contact me directly at mmeyer at law dot harvard dot edu. Many thanks in advance.

Physician-Assisted Suicide in MA

Next Tuesday, those of us registered in Massachusetts will have the opportunity to vote on “Question 2” – prescribing medication to end life, otherwise known as physician-assisted suicide.  As described by the state secretary, “This proposed law would allow a physician licensed in Massachusetts to prescribe medication, at a terminally ill patient’s request, to end that patient’s life. To qualify, a patient would have to be an adult resident who (1) is medically determined to be mentally capable of making and communicating health care decisions; (2) has been diagnosed by attending and consulting physicians as having an incurable, irreversible disease that will, within reasonable medical judgment, cause death within six months; and (3) voluntarily expresses a wish to die and has made an informed decision.”  There are, of course, a number of other safeguards built in, such as the need to make the request twice, separated by 15 days, in the presence of witnesses.  However, there could probably be stronger safeguards to protect individuals who are experiencing depression and anxiety, and might have preferable alternatives to physician-assisted death.

The proposed law is similar to measures already in place in Oregon and Washington state, where statistics show relatively low uptake and certainly not the sort of slippery slope that critics seem to be worried about.  In today’s NY Times, however, Zeke Emanuel describes four myths about physician-assisted suicide that might give some pause to people like me who plan to vote “Yes” on Question 2.  In the end, though, it strikes me that preserving room for maximal choice in these difficult end-of-life situations is for the best.

Without delving into the merits, which has been done very well elsewhere, let me just make a quick note about something else that struck me re: Question 2, which was the pamphlet of materials I received at home about the ballot measure.  It came from the state secretary, had an excellent, understandable summary of the law and what it would do, and included brief statements for and against written by selected advocates.  I thought this was an incredible mechanism to promote informed voting and deliberative democracy – and because I always have human subjects research ethics on the brain, it made me think of the possible ways this approach could be adapted to improve informed consent.  Perhaps traditional consent forms could be accompanied by a brief neutral statement about a study from the IRB, followed by short statements pro and con about the decision to participate. Just a thought.

And finally, one more note: we’re having a bioethics-heavy election day in Massachusetts this year.  Question 3 is about whether we should eliminate state criminal and civil penalties for the medical use of marijuana by qualifying patients.


Conflicting Interests in Research: Don’t Assume a Few Bad Apples Are Spoiling the Bunch

by Suzanne M. Rivera, Ph.D.

In August of 2011, the Public Health Service updated its rules to address the kind of financial conflicts of interests that can undermine (or appear to undermine) integrity in research.  The new rules, issued under the ungainly title, “Responsibility of Applicants for Promoting Objectivity in Research for which Public Health Service Funding is Sought and Responsible Prospective Contractors,” were issued with a one-year implementation period to give universities and academic medical centers sufficient time to update their local policies and procedures for disclosure, review, and management (to the extent possible) of any conflicts their researchers might have between their significant personal financial interests and their academic and scholarly activities.

The rules were made significantly more strict because a few scoundrels (for examples, click here and here) have behaved in ways that undermined the public’s trust in scientists and physicians. By accepting hundreds of thousands, even millions, of dollars from private pharmaceutical companies and other for-profit entities while performing studies on drugs and devices manufactured by the same companies, a few bad apples have called into question the integrity of the whole research enterprise.  This is a tremendous shame.

Having more than one interest is not bad or wrong; it’s normal.  Everyone has an attachment to the things they value, and most people value more than one thing.  Professors value their research, but they also want accolades, promotion, academic freedom, good parking spots, and food on their tables.  Having multiple interests only becomes a problem when the potential for personal enrichment or glory causes someone (consciously or unconsciously) to behave without integrity and compromise the design, conduct, or reporting of their research. Read More

Fear of a Digital Planet

by Suzanne M. Rivera

Federal regulations and ethical principles require that Institutional Review Boards (IRBs) consider the anticipated risks of a proposed human research study in light of any potential benefits (for subjects or others) before granting authorization for its performance.  This is required because, prior to the oversight required by regulation, unethical researchers exposed subjects to high degrees of risk without sufficient scientific and ethical justification.

Although the physical risks posed by clinical research are fairly well understood, so-called “informational risks”—risks of privacy breaches or violations of confidentiality—are the source of great confusion and controversy.  How do you quantify the harm that comes from a stolen, but encrypted, laptop full of study data?  Or the potential for embarrassment caused by observations of texted conversations held in a virtual chat room?

IRBs have for years considered the potential magnitude and likelihood of research risks in comparison to those activities and behaviors normally undertaken in regular, everyday life.  But everyday life in today’s digital world is very different from everyday life in 1981 when the regulations were implemented.  People share sonogram images on Facebook, replete with the kinds of information that would, in a research context, constitute a reportable breach under the Office of Civil Rights’ HIPAA Privacy Rule.  They also routinely allow their identities, locations, and other private information to be tracked, stored, and shared in exchange for “free” computer applications downloaded to smart phones, GPS devices, and tablet computers. Read More

Exempt Research & Expedited IRB Review: Curb Your Enthusiasm

By Michelle Meyer

A while back, over at PrawfsBlawg, Martin Pritikin had a useful post collecting advice for legal academics looking to break into increasingly popular empirical legal studies (ELS). As Jeremy Blumenthal notes in the comments, Step 1 is to be sure to get IRB approval. This post addresses what I’ll call, with a nod to Cass Sunstein’s work on Chevron deference, IRB Step Zero: Determine whether your research needs IRB approval at all.

Don’t worry, it’s an easy step: As Jeremy’s plenary admonition to all wannabe ELS scholars implies, the answer is almost certainly Yes. Although the regulations in theory establish three risk-based tiers of review — human subjects research (HSR) otherwise subject to IRB review that the regulations nevertheless exempt; HSR that is eligible for expedited review; and HSR that requires review by a fully convened IRB (everything else) — in practice, the first two tiers tend to collapse into the third. In this sense, and now I borrow from Matthew Stephenson and Adrian Vermeule, IRB review has only one step.

A quick note of clarification: As I’ve noted before (here and here), several projects I have in the works, beginning with Regulating the Production of Knowledge: Research Risk-Benefit Analysis and the Heterogeneity Problem, forthcoming next June in the Administrative Law Review, argue that we suboptimally regulate knowledge production. Just to be clear, my argument in that article doesn’t depend on my argument here about the broad scope of the regulations and their failed attempt to achieve risk-based levels of review.* Consider this post a public service for ELS types. That said, I draw here on The Heterogeneity Problem‘s background section, where interested readers will find the relevant citations.

Read More

Weighing Risks in a Pediatric Anthrax Vaccine Clinical Trial

by Brendan Abel, JD

Countless regulations have been enacted over the past 35 years to protect children from unnecessary clinical testing. Federal regulations, the Belmont Report, and professional guidelines all state that children should be enrolled in clinical trials only when the research addresses a high imperative. Federal research regulations insist that, absent a potential for direct benefit to the participating child, research should take place only if it poses “minimal risk” or a “minor increase over minimal risk.”

Thus, it was surprising that one year ago, the National Biodefense Science Board (NBSB), an advisory panel to the Secretary of Health and Human Services, recommended that HHS develop and implement a study of pre-exposure anthrax vaccine in pediatric populations. Such a vaccine would subject children to risks with little potential for therapeutic benefit. The matter is now in front of the President’s Commission for the Study of Bioethical Issues, at the request of Secretary Kathleen Sebelius, who in May visited the Commission to ask for advice regarding the ethical issues raised by this potential study. The  Commission’s recommendation is expected early next year.

Much of the controversy surrounding anthrax-vaccine testing in the pediatric population relates to the issue of timing. Since the vaccine can be used either prophylactically or as a post-exposure treatment, the government is considering whether children should be tested now to determine safety, efficacy, and dosing levels in a structured, controlled environment, or whether it is best to avoid subjecting children to the risks of the testing and to face an (unlikely) anthrax attack without the knowledge that would be gained from such a study.

Read More

Reporting Information about Clinical Trial Data: Passing the Torch from HHS to the FDA

By Leslie Francis

In 2007, motivated by concerns that pharmaceutical companies were not sharing negative findings from clinical trials, Congress established enhanced reporting requirements.

A series of articles published in January 2012 in the British Medical Journal demonstrates that data reporting remains deeply problematic, especially for industry-sponsored trials. (The articles can be found here and are very much worth reading).

A Sept. 26 posting in the Federal Register indicates that the Secretary of HHS has delegated authority to oversee the reporting process to the FDA.  Whether this signals improved monitoring of clinical trial data submissions remains to be seen.  One can only hope.

PCSBI: Privacy and Progress in Whole Genome Sequencing

Yesterday, President Obama’s Commission for the Study of Bioethical Issues released its fifth report: Privacy and Progress in Whole Genome Sequencing.  I haven’t had a chance to digest it yet, but for now, just wanted to call it to everyone’s attention.  The gist seems to be privacy, privacy, privacy.

Here are the major recommendations, straight from the Commission’s “mouth”:

Recommendation 1.1
Funders of whole genome sequencing research; managers of research, clinical, and commercial databases; and policy makers should maintain or establish clear policies defining acceptable access to and permissible uses of whole genome sequence data. These policies should promote opportunities for models of data sharing by individuals who want to share their whole genome sequence data with clinicians, researchers, or others.

Recommendation 1.2
The Commission urges federal and state governments to ensure a consistent floor of privacy protections covering whole genome sequence data regardless of how they were obtained. These policies should protect individual privacy by prohibiting unauthorized whole genome sequencing without the consent of the individual from whom the sample came.

Read More

FDA Drug Amendments: Still a good fit at fifty?

Fifty years ago on Wednesday, President Kennedy signed into law the Kefauver-Harris Drug Amendments. The amendments radically overhauled the way in which manufacturers brought drugs to market. Most importantly, the amendments instituted the four-phase review process and the requirement that manufacturers obtain informed consent from people receiving experimental drugs. If the past fifty years are any indication, though, it’s unlikely that FDA’s current regulations are well suited to deal with the changing context of medicine, including the clinical trials of stem-cell therapies forecast by the Nobel Committee’s awarding of this week’s prize in Physiology or Medicine.

The amendments’ supporters had good intentions and the regulations have had positive effects overall. Yet the US government is still trying to redress many of their negative consequences. The rules have proven to be outmoded for new circumstances that policymakers did not have in mind when they created the amendments five decades ago.

The four-phase review process requires that manufacturers apply to the FDA and submit drugs for agency review three times—at least. One consequence of the four-phase review system has been to extend the time until consumers can access new therapies. This can seem a small price to pay to assure that drugs are “safe and effective,” a phrase that has become the slogan for the amendments. People with new, fast-moving diseases, however, have seen the delay as a death sentence. For example, sociologist Steven Epstein has written extensively and carefully about the response to drug delays in the 1980s and 1990s among the HIV/AIDS activist community. The FDA has responded with changes, such as a fast-track approval system, but these shifts tend to come only in response to dire crises.

Read More

Upcoming Event – Frank Miller on Placebo-Controlled Trials Before Informed Consent

What were they thinking? Placebo-controlled trials before informed consent

Franklin G. Miller, Ph.D., Department of Bioethics, National Institutes of Health

October 25, 2012 – 4pm

Shapiro Clinical Center JCRT 5A (East Campus), Beth Israel Deaconess Medical Center, Boston, MA

Sponsored by the Program in Placebo Studies and the Therapeutic Encounter; Martinos Center for Biomedical Imaging; MGH; BIDMC Division of General Medicine and Primary Care; HMS Department of Global Health and Social Medicine

For more information call 617-945-7827