By Kate Greenwood
[Cross-posted at Health Reform Watch]
As I have blogged about before, last year, in Kaiser v. Pfizer, the First Circuit joined the handful of courts to have approved a causal chain of injury running from a pharmaceutical company’s fraudulent promotion, through the prescribing decisions of thousands of individual physicians, to the prescriptions for which a third-party payer paid. To establish but-for causation in the case, Kaiser submitted an expert report and testimony from Dr. Meredith Rosenthal, a health economist at the Harvard School of Public Health. Dr. Rosenthal conducted a regression analysis to determine the portion of physicians’ prescribing of the drug Neurontin that was caused by the defendant’s fraudulent promotion, arriving at percentages ranging from 99.4% of prescriptions for bipolar disorder to 27.9% of prescriptions for migraine.
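For readers unfamiliar with the basic approach, the sketch below shows the general shape of an attribution regression of this kind. It is my own illustration, not Dr. Rosenthal’s actual model: the data are simulated, and the baseline, effect size, and spending figures are invented assumptions.

```python
# Hypothetical sketch (NOT Dr. Rosenthal's actual model): estimating the
# share of prescriptions attributable to promotional spending with a
# simple least-squares regression. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_months = 120

# Simulated monthly promotional spending (in $1,000s) and an assumed
# baseline level of prescribing that would occur with no promotion.
promo = rng.uniform(50, 200, n_months)
baseline = 400.0
true_effect = 3.0  # extra prescriptions per $1,000 of promotion (assumed)

scripts = baseline + true_effect * promo + rng.normal(0, 30, n_months)

# Fit scripts ~ intercept + promo by ordinary least squares.
X = np.column_stack([np.ones(n_months), promo])
coef, *_ = np.linalg.lstsq(X, scripts, rcond=None)
intercept, slope = coef

# Attributable share: predicted prescriptions driven by promotion,
# divided by total prescriptions observed.
attributable = slope * promo.sum() / scripts.sum()
print(f"estimated effect per $1,000 of promotion: {slope:.2f}")
print(f"share of prescriptions attributable to promotion: {attributable:.1%}")
```

The attributable percentage is what an expert would then multiply against the prescriptions a payer covered to arrive at a damages figure; the real litigation models were, of course, far richer than this two-variable toy.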
Pfizer argued that Dr. Rosenthal’s regression analysis should not have been admitted (and at least suggested that such an analysis should never be admitted in a third-party payer case) because regression analysis could not “take into account the patient-specific, idiosyncratic decisions of individual prescribing physicians.” Dr. Rosenthal’s report, the company argued, “merely demonstrated ‘correlation’ and not ‘causation.’” The First Circuit disagreed, upholding the lower court’s determination that the challenged evidence was admissible under Federal Rule of Evidence 702, because “regression analysis is a well-recognized and scientifically valid approach to understanding statistical data” and because it “fit” the facts of the case.
Eric Alexander, a partner at Reed Smith, made a similar argument to Pfizer’s when he critiqued a decision issued in July in a third-party payer case in the Eastern District of Pennsylvania. Writing at the Drug and Device Law blog, Alexander criticized the court for failing to address “the fundamental—to us—issue of whether an economist [Dr. Rosenthal was the plaintiff’s expert in that case, too] can ever determine why prescriptions were written.” Alexander points out that “[t]o get to millions of dollars of revenue from prescriptions, many physicians have to prescribe the drug to many patients[,]” and those physicians can “pretty much do what they want[.]” Economists, Alexander argues, should not be allowed to bypass this complexity and simply “assume” causation.
I would argue that, as idiosyncratic as physician decision-making may be, it is not uniquely so. As the First Circuit noted in Kaiser v. Pfizer, “courts have long permitted parties to use statistical data to establish causal relationships” in antitrust, employment discrimination, and other types of cases. In their article The Use and Misuse of Econometric Evidence in Employment Discrimination Cases, which is forthcoming in the Washington and Lee Law Review, Joni Hersch and Blair Druhan explain that plaintiffs have used regression analyses in employment discrimination cases for more than thirty-five years, to establish a prima facie case of disparate treatment or disparate impact. Plaintiffs in these cases use such analyses to “show that, all other qualifications equal, being a member of a protected class decreased the plaintiff’s expected wage or likelihood of receiving a promotion or being hired.” In class action cases, plaintiffs can also use regression analyses “to establish commonality between the members of the class as required by statute.”
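The “all other qualifications equal” logic Hersch and Druhan describe can be made concrete with a short sketch. This is my own illustration, not an example from their article: the wage figures, sample size, and $4,000 penalty are all simulated assumptions.

```python
# Illustrative sketch (not from Hersch and Druhan's article): the kind of
# wage regression used in disparate treatment cases, where a negative
# coefficient on protected-class membership, holding qualifications
# equal, supports a prima facie case. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

experience = rng.uniform(0, 30, n)    # years of experience
education = rng.integers(12, 21, n)   # years of schooling
protected = rng.integers(0, 2, n)     # 1 = protected class member

# Assumed data-generating process: a $4,000 penalty for protected-class
# members with otherwise identical qualifications.
wage = (20_000 + 1_200 * experience + 2_500 * education
        - 4_000 * protected + rng.normal(0, 5_000, n))

# Regress wage on qualifications plus the protected-class indicator.
X = np.column_stack([np.ones(n), experience, education, protected])
coef, *_ = np.linalg.lstsq(X, wage, rcond=None)

print(f"estimated protected-class wage gap: {coef[3]:.0f}")
```

Because experience and education are controlled for, the coefficient on the protected-class indicator recovers the penalty built into the simulation; that coefficient is the number plaintiffs point to when arguing that membership in the class, and nothing else, depressed their wages.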
In their article, Hersch and Druhan evaluate the three most common challenges to regression analyses—that they “suffer from omitted variables, a small sample size, and a lack of statistical significance”—and explain that there are “very few circumstances” under which these challenges are meritorious. The authors go on to describe the results of a regression analysis they performed to try to understand the consequences of courts’ considering econometric critiques in a sample of employment discrimination cases. Their analysis revealed that “if [an] opinion mentions any of the econometric critiques … then the plaintiff is 28.3 percentage points less likely to have a favorable result.” This is concerning in light of the examples Hersch and Druhan present of courts that are not “aware of the tricks that expert witnesses argue when attempting to impugn the reliability of valid statistical evidence presented by plaintiffs.” Hersch and Druhan recommend that “court[s] exercise [their] gatekeeping function by either acting under Daubert or establishing a peer-review system to guarantee that only valid challenges to regression results enter the courtroom.”
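The omitted-variables critique, in particular, is narrower than it sounds, and a small simulation (my illustration, not Hersch and Druhan’s) shows why: leaving a variable out of a regression biases the coefficient of interest only when the omitted variable is correlated with both the included variable and the outcome. All data below are simulated.

```python
# Sketch of when an omitted-variable critique has teeth: omitting z
# biases the estimated effect of x only if z is correlated with BOTH
# x and the outcome. All data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 5000

x = rng.normal(size=n)            # variable of interest
z_corr = x + rng.normal(size=n)   # omitted variable correlated with x
z_uncorr = rng.normal(size=n)     # omitted variable uncorrelated with x

# In both outcomes the true effect of x is 2.0.
y_biased = 2.0 * x + 3.0 * z_corr + rng.normal(size=n)
y_fine = 2.0 * x + 3.0 * z_uncorr + rng.normal(size=n)

def slope_on_x(y):
    # Simple regression of y on x alone, omitting z entirely.
    X = np.column_stack([np.ones(n), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"omitted z correlated with x:   {slope_on_x(y_biased):.2f}")  # far from 2
print(f"omitted z uncorrelated with x: {slope_on_x(y_fine):.2f}")    # near 2
```

In other words, a defendant cannot defeat a regression merely by naming a variable the expert left out; it must be a variable plausibly correlated with both the challenged practice and the outcome, which is consistent with Hersch and Druhan’s conclusion that such critiques are meritorious in “very few circumstances.”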
Hersch and Druhan suggest that their article could be helpful to judges evaluating statistical evidence in employment discrimination cases; I think its usefulness extends further. As plaintiffs continue to turn to regression analyses to establish causation and injury in third-party payer cases brought against pharmaceutical companies, courts will continue to be challenged to evaluate the reliability of such analyses, and to evaluate the reliability of defendants’ challenges to them. Hersch and Druhan’s article can help. Causation is causation, and doctors are no more—in fact, one would hope they would be less—idiosyncratic than employers.