Good news for many South African HIV patients—with a big glitch

On Wednesday, South African Health Minister Aaron Motsoaledi announced that, as of January 2015, HIV-positive patients in the country would start receiving free antiretroviral treatment once their CD4 count fell below 500, instead of the current threshold of 350. Some patient groups would start receiving antiretrovirals immediately upon being diagnosed with HIV infection, regardless of their clinical stage.

Last month, Till Bärnighausen, Dan Wikler and I predicted in PLoS Medicine that sub-Saharan nations would move in the direction that South Africa is now moving, and pointed out a big complication. This policy change might make several gigantic trials of so-called treatment-as-prevention in sub-Saharan Africa impossible to complete successfully. As we explained, these trials remain important for assessing the potential of treatment-as-prevention to curb the spread of HIV in general populations (with many different relationship types and different levels of care delivery and support).

In treatment-as-prevention, antiretrovirals are offered to patients immediately upon their diagnosis with HIV. The hope is that very early treatment would be better for these patients and prevent them from infecting others. We also offered some ways out of this mess, but they involve untraditional approaches to research conduct and to policy. Our piece was featured in the June issue of UNAIDS’ HIV This Month.

Times Report Models Worst Practices for Policy Research Reporting

By Scott Burris

I read the Times daily, and so naturally would like to be able to think it deserves to be regarded as a credible "newspaper of record." Today the paper outdid itself to disappoint, in a story by Sam Dolnick headlined "Pennsylvania Study Finds Halfway Houses Don't Reduce Recidivism." In the cause of making lemonade from lemons, I am drawing up a list of "worst practices" from this little journalistic fender-bender:

Worst Practices Reporting Policy Evaluation Research (Provisional – come on readers, add your own.)

1. Don’t provide a title or author of the study or publication information on the study being described.

The story says only that it was conducted and overseen by the Pennsylvania Department of Corrections. There is a link later in the story to what turns out to be the Department's annual report on recidivism. Not quite a "study" of halfway houses.

2. Don’t clearly describe the study.

The story does describe the study adjectivally – as "groundbreaking." It is, first of all, a bit of a stretch to call it a "study" at all. This is not the result of a systematic effort to explore the specific question of whether halfway houses work better than direct release to the street; it certainly was not a peer-reviewed or published study. Rather, the Times story is drawing on one section of an annual report produced by the state on recidivism among all prisoners released through all release mechanisms. The term "study" and the consistent suggestion that the study is important ("groundbreaking" results "so conclusive" that they have "startled" leaders and experts) might lull the reader into believing that the "study" was well and deliberately designed to answer the question it supposedly posed – for example, a randomized, controlled and blinded trial of releasing prisoners directly to the street compared to halfway houses. Nope. This report is just a summary of outcome statistics, with a couple of paragraphs reporting in general terms on some statistical analysis meant to control for differences between the prisoners sent to halfway houses and those released to the street.

3. Just ignore the obvious problems for causal inference.

The plain and fundamental problem with pumping this study as powerful support for the claim that halfway houses don't work is that we have no reason to be confident that the prisoners put into halfway houses are, as a group, the same as prisoners released directly to the street. It is elementary that statistical controls for observed differences cannot make up for a non-random, retrospective design, which cannot control for unobserved or unknown differences. Saying that this study "is casting serious doubt on the halfway-house model" is perhaps an attempt at caution, but far too weak a one. This study cannot cast serious doubt on anything, though it certainly points, as the report itself says, to worrisome outcomes in the halfway house system.

Read More