By James Love
Vinay Prasad and Sham Mailankody’s JAMA Internal Medicine study of the research and development (R&D) costs of bringing a single cancer drug to market has sparked renewed discussion about how to measure R&D costs and about the relationship between R&D costs and prices. What follows is my perspective on the Prasad/Mailankody (PM) paper, on how it compares to DiMasi’s widely quoted 2016 study, and on the debate in general.
PM identify ten companies that put a single cancer drug on the market from 2007 to 2015, as the first successful product for each company. The authors use Securities and Exchange Commission (SEC) reports for investors to report total R&D outlays by the company, on that drug plus any other compounds the company was pursuing, from when the authors reckoned the development of the successful product began until its approval by the FDA.
The reported R&D outlays ranged from $157 million to $1.95 billion, with an average of $720 million and a median of $648 million. After capital costs of 7% were added, the average and median costs were $905 million and $757 million, respectively.
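The cost-of-capital adjustment compounds each year’s outlay forward to the approval date. A minimal sketch of the mechanics, using PM’s 7% rate; the outlay series here is hypothetical, since the actual per-company timing comes from the SEC filings and is not reproduced in this post:

```python
# Illustrative only: compound hypothetical annual R&D outlays forward
# to the approval year at the 7% cost of capital used by PM.
def capitalized_cost(annual_outlays, rate=0.07):
    """annual_outlays[0] is the earliest year; the last entry is the
    approval year. Each outlay compounds for the years remaining."""
    n = len(annual_outlays)
    return sum(x * (1 + rate) ** (n - 1 - i) for i, x in enumerate(annual_outlays))

# Hypothetical example: $100M per year for 7 years (figures in $M).
# Seven years of $100M outlays grow to roughly $865M at approval,
# showing how even a modest 7% rate inflates multi-year spending.
total = capitalized_cost([100] * 7)
```

The gap between PM’s out-of-pocket and capitalized figures is smaller than DiMasi’s partly because of this lower rate, and partly because of how the outlays are assumed to be spread over time.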
Nine of the 10 products in the PM study were approved for an Orphan Drug indication. The authors made no attempt to estimate the value of the 50% orphan drug tax credit for qualifying trials.
Immediately there was a comparison to DiMasi et al.’s 2016 paper, including, in particular, their estimate of the cost of drug development: $2.558 billion in 2013 dollars, about $2.7 billion today. Let’s look at some differences between the two studies.
The PM study identified ten cancer drugs from small publicly traded companies registering their first product, and used data on R&D outlays from SEC reports for investors.
The SEC reports typically do not provide consistent separation of costs by R&D stage or product.
DiMasi used data from 10 “midsized and large pharmaceutical firms” and a confidential and proprietary survey on clinical expenditures, on a more diverse set of diseases.
Without project level data on pre-clinical outlays, DiMasi made a consequential assumption that for every drug, 44.5 cents was spent on pre-clinical outlays for every dollar spent on clinical outlays (expressed by DiMasi as a constant 30.8% ratio of pre-human to total R&D spending).
DiMasi reported costs by Phase 1, 2 and 3 trials, which in his sample averaged $339.3 million, with a median of $262.1 million. Everything else in the DiMasi study was built on the trial costs, which, adjusted for the risk of failures, became $965 million. Pre-clinical expenditures were then estimated at $430 million (44.5% of the $965 million in risk-adjusted clinical outlays). Together this added up to $1.395 billion. DiMasi then added an aggressive allowance for capital costs, bringing the total to $2.558 billion.
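DiMasi’s build-up can be checked step by step from the figures quoted above, including the equivalence between the 44.5% pre-clinical-to-clinical ratio and the 30.8% pre-human share of total spending:

```python
# Reproduce the arithmetic behind DiMasi's headline figure (in $M),
# using only the numbers quoted in the text above.
clinical_risk_adjusted = 965                   # risk-adjusted clinical outlays
preclinical = 0.445 * clinical_risk_adjusted   # 44.5 cents per clinical dollar, ~$430M
out_of_pocket = clinical_risk_adjusted + preclinical  # ~$1,395M
capital_allowance = 1163                       # DiMasi's capital cost add-on
total = out_of_pocket + capital_allowance      # ~$2,558M

# The 44.5% assumption is the same thing as pre-human spending being a
# constant 30.8% share of total R&D: 0.445 / (1 + 0.445) ~= 0.308
pre_human_share = 0.445 / 1.445
```

Note that everything downstream of the $965 million clinical figure, more than half of the headline number, rests on the 44.5% ratio and the capital cost assumptions rather than on observed project-level data.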
DiMasi’s risk-adjusted outlays per approved product were $1,395 million, and that figure was for large companies across all diseases. PM’s equivalent figure was $720 million, for smaller firms, and limited to ten cancer drugs, nine of which were approved for orphan indications. Given the differences in company size and orphan drug status alone, the lower PM estimates were not surprising.
The area where the DiMasi and PM estimates diverge most significantly is the cost of capital. PM’s adjustment for capital costs was fairly modest. DiMasi’s estimate was massive: $1.163 billion, most of which ($668 million) was attributed to the assumed pre-clinical outlays for which he had no project-level data. The difference stems from DiMasi’s higher assumed rate of return and his assumptions about the timing of outlays on clinical and pre-clinical research.
Why do R&D costs matter?
After the PM paper was published, there was a flurry of commentary in JAMA, on Twitter and in various news accounts and blogs, such as this one by the CEO of BIO. One debated question was, do R&D costs matter for prices, or policies?
There is plenty of evidence that companies ignore actual R&D outlays when they set prices. Companies charge what the market can bear, and that depends upon many factors, but never on a backward look at the sunk costs of R&D, a point made by Hank McKinnell, the former CEO of Pfizer and countless others. That said, policies that support high prices are very much influenced by perceptions of R&D costs, and for that reason, estimates are surprisingly contested and political.
Disputes over the facts could be resolved by greater transparency of R&D outlays, but policies to require disclosure have been bitterly resisted by drug companies.
And, while many policy makers and journalists want a single number (like DiMasi’s $2.7 billion or PM’s $720 million figure) the details and the context are important. In general, trial sizes, complexity, duration, distribution of investment by phase, and role of government or charity R&D subsidies, etc., vary so widely that an estimated average for an entire industry can be misleading or irrelevant.
Consider a drug like Spinraza, priced at $750,000 for the first year, for a rare disease, invented on government grants, and put on the market with fewer than 500 patients in trials that qualified for the 50% Orphan Drug Tax Credit. If the risk-adjusted R&D costs were less than $40 million, as KEI has calculated, how does either the DiMasi or the PM average help, other than to give the wrong impression and, for an uninformed audience, undermine the credibility of the $40 million estimate?
There are all sorts of issues and challenges in measuring R&D costs. It is easier to measure and verify risks and investments in human clinical trials. Pre-clinical risks and allocations to specific projects are more of a black box. R&D expenditures reported to shareholders mix in spending on conducting experiments with the costs of acquiring rights to compounds, which are driven by the expectations of prices for future products.
In general, disaggregation is helpful. The questions researchers, policymakers and stakeholders ask are different, and require access to different facts. It’s easier to evaluate and verify the risk-adjusted costs of trials when you have outlays for each trial separately, by year, as well as the enrollment for the trials, than when you have simply been told to blindly accept company claims to have spent a billion (or two) dollars.
The PM study has its limitations, noted by the authors and others, but it was also transparent. DiMasi has consistently refused to disclose even basic facts about the sample of trials in his study. We don’t have enrollment or per-patient costs for the trials, even though both were disclosed in his earlier 2003 paper, and his estimate of Phase 1-3 trial costs increased from $125 million to $339 million over a period in which the number of FDA-approved orphan products soared, particularly for cancer treatments.
Years ago, the NIH National Cancer Institute used to publish annual data on its costs per trial and per patient, but today the NIH has resisted efforts to share information on the costs it incurs in conducting trials, or the role of federal subsidies, even when the NIH is licensing patents to the companies.
It is time for much greater transparency of R&D costs, in as many areas as possible. We should know what companies are spending money on, by disease and product, by stage of development, and on each trial. The Orphan Drug Tax Credit subsidy should be as transparent as a government grant. The NIH should publish its Cooperative Research and Development Agreements (CRADAs) and every government funded grant should include requirements for public disclosure of development costs and the details of all licensing agreements. We should no longer have a system where the only transparency is that material to investors or behind paywalls only accessible to businesses under an NDA.
Knowing more about R&D costs is necessary to reform both drug pricing and the complex system of incentives that are necessary to induce private investments in R&D. We have to stop operating behind a veil of deliberate ignorance and stop acting as if it is okay for facts to be political and contested when they can be resolved.