By Scott Burris
I read the Times daily, and so naturally would like to be able to think it deserves to be regarded as a credible “newspaper of record.” Today the paper outdid itself to disappoint, in a story by Sam Dolnick headlined “Pennsylvania Study Finds Halfway Houses Don’t Reduce Recidivism.” In the cause of making lemonade from lemons, I am drawing a list of “worst practices” from this little journalistic fender-bender:
Worst Practices in Reporting Policy Evaluation Research (Provisional – come on, readers, add your own.)
1. Don’t provide the title, author, or publication information of the study being described.
The story says only that the study was conducted and overseen by the Pennsylvania Department of Corrections. A link later in the story leads to what turns out to be the Department’s annual report on recidivism. Not quite a “study” of halfway houses.
2. Don’t clearly describe the study.
The story does describe the study adjectivally – as “groundbreaking.” It is, first of all, a bit of a stretch to call it a “study” at all. This is not the result of a systematic effort to explore the specific question of whether halfway houses work better than direct release to the street; it certainly was not a peer-reviewed or published study. Rather, the Times story is drawing on one section of an annual report produced by the state on recidivism among all prisoners released through all release mechanisms. The term “study” and the consistent suggestion that the study is important (“groundbreaking” results so conclusive that they have “startled” leaders and experts) might lull the reader into believing that the “study” was well and deliberately designed to answer the question it supposedly posed – for example, a randomized, controlled, and blinded trial comparing release of prisoners directly to the street with release through halfway houses. Nope. This report is just a summary of outcome statistics, with a couple of paragraphs reporting in general terms on some statistical analysis meant to control for differences between the prisoners sent to halfway houses and those released to the street.
3. Just ignore the obvious problems for causal inference.
The plain and fundamental problem with pumping this study as powerful support for the claim that halfway houses don’t work is that we have no reason to be confident that the prisoners placed in halfway houses are, as a group, the same as prisoners released directly to the street. It is elementary that statistical controls for observed differences cannot make up for a non-random, retrospective design, which leaves unobserved or unknown differences uncontrolled. Saying that this study “is casting serious doubt on the halfway-house model” is perhaps an attempt at caution, but far too weak a one. This study cannot cast serious doubt on anything, though it certainly points, as the report itself says, to worrisome outcomes in the halfway house system.
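The problem with unobserved differences can be made concrete with a toy simulation – my own illustration, with invented numbers, not the Pennsylvania data. Suppose placement in a halfway house is influenced by both an observed risk score and some unobserved trait, and suppose recidivism depends on both of those but not at all on placement. Even after "controlling" for the observed score, the analysis still shows a large spurious "effect" of halfway houses:

```python
# Toy simulation of unobserved confounding (illustrative only).
# All variable names and parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

risk = rng.normal(size=n)    # observed risk score
unobs = rng.normal(size=n)   # unobserved trait (e.g., instability)

# Non-random assignment: prisoners with higher observed risk AND a
# higher unobserved trait are more likely to go to a halfway house.
p_house = 1 / (1 + np.exp(-(risk + unobs)))
house = rng.random(n) < p_house

# True model: recidivism depends on risk and the unobserved trait,
# and NOT AT ALL on halfway-house placement (true effect = 0).
p_recid = 1 / (1 + np.exp(-(-0.5 + risk + unobs)))
recid = rng.random(n) < p_recid

# Naive comparison: the halfway-house group looks much worse.
naive = recid[house].mean() - recid[~house].mean()

# "Controlled" estimate: regress recidivism on placement plus the
# observed risk score only; the unobserved trait is left out, just
# as it must be in any retrospective analysis.
X = np.column_stack([np.ones(n), house, risk])
coef, *_ = np.linalg.lstsq(X, recid.astype(float), rcond=None)
adjusted = coef[1]  # still well above the true effect of zero

print(f"naive gap: {naive:.3f}, risk-adjusted gap: {adjusted:.3f}")
```

Both numbers come out substantially positive even though placement has, by construction, no effect at all. Only randomized assignment (or a design that somehow accounts for the unobserved trait) would recover the true zero.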