Why Do Differences in Clinical Trial Design Make It Hard to Compare COVID-19 Vaccines?

Cross-posted from Written Description, where it originally appeared on June 30, 2021. 

By Lisa Larrimore Ouellette, Nicholson Price, Rachel Sachs, and Jacob S. Sherkow

The number of COVID-19 vaccines is growing, with 18 vaccines in use around the world and many others in development. The global vaccination campaign is slowly progressing, with over 3 billion doses administered, although the percentage of doses administered in low-income countries remains at only 0.3%. But because the vaccines were tested in differently designed clinical trials, it is difficult to make apples-to-apples comparisons — even just for the 3 vaccines authorized by the FDA for use in the United States. In this post, we explore the open questions that remain because of these differences in clinical trial design, the FDA’s authority to help standardize clinical trials, and what lessons can be learned for vaccine clinical trials going forward.

Monthly Round-Up of What to Read on Pharma Law and Policy

By Ameet Sarpatwari, Beatrice Brown, Neeraj Patel, and Aaron S. Kesselheim

Each month, members of the Program On Regulation, Therapeutics, And Law (PORTAL) review the peer-reviewed medical literature to identify interesting empirical studies, policy analyses, and editorials on health law and policy issues.

Below are the citations for papers identified from the month of August. The selections feature topics ranging from a commentary on the need for rigorous scientific evaluation of COVID-19 vaccine candidates in the face of political and economic pressures, to an evaluation of patients’ and pharmacists’ experiences with pill appearance changes, to an examination of the extent and cost of potentially inappropriate prescription drug prescriptions for older adults. A full posting of abstracts/summaries of these articles may be found on our website.

Good news for many South African HIV patients—with a big glitch

On Wednesday, South African Health Minister Aaron Motsoaledi announced that, as of January 2015, HIV-positive patients in the country would start receiving free antiretroviral treatment once their CD4 count fell below 500, instead of the current threshold of 350. Some patient groups would start receiving antiretrovirals immediately upon being diagnosed with HIV infection, regardless of their clinical stage.

Last month, Till Bärnighausen, Dan Wikler and I predicted in PLoS Medicine that sub-Saharan nations would move in the direction that South Africa is now moving, and pointed out a big complication. This policy change might make several gigantic trials of so-called treatment-as-prevention in sub-Saharan Africa impossible to complete successfully. As we explained, these trials remain important for assessing the potential of treatment-as-prevention to curb the spread of HIV in general populations (with many different relationship types and different levels of care delivery and support).

In treatment-as-prevention, antiretrovirals are offered to patients immediately upon their diagnosis with HIV. The hope is that very early treatment would be better for these patients and prevent them from infecting others. We also offered some ways out of this mess, but they involve untraditional approaches to research conduct and to policy. Our piece was featured in the June issue of UNAIDS’ HIV This Month.

Times Report Models Worst Practices for Policy Research Reporting

By Scott Burris

I read the Times daily, and so naturally would like to be able to think it deserves to be regarded as a credible “newspaper of record.” Today the paper outdid itself to disappoint, in a story by Sam Dolnick headlined “Pennsylvania Study Finds Halfway Houses Don’t Reduce Recidivism.” In the cause of making lemonade from lemons, I am drawing up a list of “worst practices” from this little journalistic fender-bender:

Worst Practices Reporting Policy Evaluation Research (Provisional – come on readers, add your own.)

1. Don’t provide a title or author of the study or publication information on the study being described.

The story says only that it was conducted and overseen by the Pennsylvania Department of Corrections. There is a link later in the story to what turns out to be the Department’s annual report on recidivism — not quite a “study” of halfway houses.

2. Don’t clearly describe the study.

The story does describe the study adjectivally — as “groundbreaking.” It is, first of all, a bit of a stretch to call it a “study” at all. This is not the result of a systematic effort to explore the specific question of whether halfway houses work better than direct release to the street; it certainly was not a peer-reviewed or published study. Rather, the Times story is drawing on one section of an annual report produced by the state on recidivism among all prisoners released through all release mechanisms. The term “study” and the consistent suggestion that the study is important (“groundbreaking” results “so conclusive” that they have “startled” leaders and experts) might lull the reader into believing that the “study” was well and deliberately designed to answer the question it supposedly posed — for example, a randomized, controlled, and blinded trial comparing release of prisoners directly to the street with release to halfway houses. Nope. This report is just a summary of outcome statistics, with a couple of paragraphs reporting in general terms on some statistical analysis meant to control for differences between the prisoners sent to halfway houses and those released to the street.

3. Just ignore the obvious problems for causal inference.

The plain and fundamental problem with pumping this study as powerful support for the claim that halfway houses don’t work is that we have no reason to be confident that the prisoners put into halfway houses are, as a group, the same as prisoners released directly to the street. It is elementary that statistical controls for observed differences cannot make up for a non-random, retrospective design, which cannot control for unobserved or unknown differences. Saying that this study “is casting serious doubt on the halfway-house model” is perhaps an attempt at caution, but far too weak a one. This study cannot cast serious doubt on anything, though it certainly points, as the report itself says, to worrisome outcomes in the halfway house system.
