By Nir Eyal
Young people often assume that their life expectancy will remain what it is today, even though life expectancy will likely have grown by the time they are old. When we imagine our futures, we often neglect to account for radical technologies that were not foreseeable before their invention, just as the internet was not, and instead imagine something closer to present realities. We do not appreciate how different the future could be.
Among other things, the future might be dangerous to humankind in ways that we currently fail to appreciate. We got lucky: nuclear energy seems hard for individuals to develop at home, but will that last, and will new WMDs remain impossible to replicate with 3D printers or other future technologies? Can anyone guarantee that viruses manufactured for scientific research will not be spread by error or terror? Can we guarantee that robots designed for contained military purposes will not go out of control? Or that once artificial intelligence is advanced enough to design other artificial intelligence, humans will remain safe for long? Some of the greatest dangers to our species are unknown, simply because the technologies that create them have not been invented yet, just as many technologies that exist and threaten us today were not invented 100 years ago.
In a multimedia presentation that drew prolonged applause from a crowd of Harvard undergraduates, Estonian programmer Jaan Tallinn wove together three stories: the story of Kazaa and Skype, which he helped start; his personal journey into studying and promoting the study of existential risk; and a “sermon” (as he put it, tongue in cheek) on the ethical responsibilities of technology developers.
Tallinn proposed taking active steps in anticipation of our future errors, both to make businesses robust and to keep our species safe in an opaque future: incorporating safety margins and continually questioning one’s assumptions. He concluded by arguing, provocatively, that having fun is indispensable to both goals.
The talk was organized by the student organization Harvard High Impact Philanthropy (HHIP).
So You Want to be a Technology Developer…
The roots of Skype go back to one email. If that email hadn’t been sent, the world today might be different. In general, technology development is not something that “just happens” — instead, it’s a result of particular actions by individual people. Moreover, the responsibility of technology developers must increase proportionally to the power of their creations. The talk sketches out a vision of what it means to be a responsible technology developer, using behind-the-scenes stories and videos from the early days of Skype development.
Jaan Tallinn, co-founder of Skype
Wednesday, October 30th
5:30 – 6:30 PM
Science Center A
RSVP to this event
The event is organized by Harvard HIP (High Impact Philanthropy).
The Economist has a long, detailed, and readable piece about the difficulties of inferring anything from the published findings of biomedical science. There are all sorts of problems that fall short of scientific fraud, including the biases caused by industry funding of biomedical science, the biases of unblinded raters who see what they want to see, and the bias of journal editors toward publishing only “positive” findings. (I am particularly enamored with this graphic, which shows the fundamental problem of inference.) Researchers rarely even attempt to replicate prior findings, and when replications are attempted, they often fail.
The Economist piece can be read as something close to an outright assault on empiricism, at least as we now know it. In practical terms, it is prudent for physicians, patients, and payors to be wary of the findings presented in even the top journals.
One of the beauties of our scientific system is that it is wildly decentralized. Scientists (and their funders) can test any hypothesis that they find interesting, and they can use whatever methods they prefer. Likewise, journal editors can publish whatever they want. While such academic and market freedom is attractive, it results in quite a hodgepodge of science, with replication studies and publication of null results being afterthoughts. The NIH and NSF have in the past functioned to set an agenda and demand rigor, but as their funding wanes, the chaos waxes.
The problems are scientific, but any solution will be institutional (and thus legal). I have argued for a partial solution to industry bias in my short article, “The Money Blind: How to Stop Industry Influence in Biomedical Science Without Violating the First Amendment.” Independent scientific testing could be conducted by a neutral intermediary, which would pool funds. In a similar vein, there is also a new project of the Science Exchange, called “The Reproducibility Initiative.” This program offers to serve as the independent scientific agency that attempts to validate known results. But there is not yet a large-scale funding model in place. If biomedical journal editors would at least put disclosures in their structured abstracts (an intervention we have tested), over the long run that may also nudge industry to use such gold-standard independent testing when they have something that is truly provable. And, at least in the domain of the products regulated by the FDA, the agency should consider using its current statutory authority to push companies toward independent, robust, and replicated science.
The Edmond J. Safra Center for Ethics at Harvard University has organized a symposium on Institutional Corruption and Pharmaceutical Policy that will be published in the forthcoming issue of the Journal of Law, Medicine & Ethics, 2013: Vol. 14 (3). It will be published at the beginning of September.
The goals of pharmaceutical policy and medical practice are often undermined due to institutional corruption — that is, widespread or systemic practices, usually legal, that undermine an institution’s objectives or integrity. The pharmaceutical industry’s own purposes are often undermined. In addition, pharmaceutical industry funding of election campaigns and lobbying skews the legislative process that sets pharmaceutical policy. Moreover, certain practices have corrupted medical research, the production of medical knowledge, the practice of medicine, drug safety, and the Food and Drug Administration’s oversight of pharmaceutical marketing.
Marc Rodwin invited a group of scholars to analyze these issues, with each author taking a different look at the sources of corruption, how it occurs and what is corrupted. The articles address five topics: (1) systemic problems, (2) medical research, (3) medical knowledge and practice, (4) marketing, and (5) patient advocacy organizations.
For more information on the symposium, including a full list of the articles, please visit the Safra Center’s website. You can also access advance copies of the 16 symposium articles through SSRN online.
For a summary of each article and the key themes in the symposium, see Marc Rodwin, Institutional Corruption and Pharmaceutical Policy.
Update: In other SUPPORT news today, a second group of bioethicists has written to the NEJM in, ahem, support of OHRP’s original criticisms of the SUPPORT trial. Readers may recall that another group of prominent bioethicists had previously published a letter in the NEJM in support of SUPPORT.
OHRP today announced details of the public meeting it previously said it would convene to address the SUPPORT trial and similar trials comparing two or more standard-of-care interventions in which subjects are randomized.
From an OHRP email:
On June 26, 2013, the Department of Health and Human Services (HHS) announced in the Federal Register an August 28, 2013 public meeting to seek public input and comment on how certain provisions of the Federal policy for the protection of human subjects should be applied to research studying one or more interventions which are used as standard of care treatment in the non-research context.
HHS specifically requests input regarding how an institutional review board (IRB) should assess the risks of research involving randomization to one or more treatments within the standard of care for particular interventions, and what reasonably foreseeable risks of the research should be disclosed to research subjects in the informed consent process.
HHS is seeking participation in the meeting and written comments from all interested parties, including, but not limited to, IRB members, IRB staff, institutional officials, research institutions, investigators, research subject advocacy groups, ethicists, and the regulated community at large. The meeting and the written comments are intended to assist HHS, through the Office for Human Research Protections (OHRP), Office of the Assistant Secretary for Health (OASH), in developing guidance regarding what constitutes reasonably foreseeable risk in research involving standard of care interventions such that the risk is required to be disclosed to research subjects. HHS is seeking input on a number of specific questions but is interested in any other pertinent information participants in the public meeting would like to share.
More details and deadlines after the jump.
UPDATE: A class action lawsuit has been filed in federal court against UAB providers and IRB members on behalf of infants enrolled in the SUPPORT study (through their parents). The Amended Complaint, which was filed May 20, can be found here. In addition, here are two more sets of reactions to the SUPPORT study in the NEJM, both in defense of it, from a group of prominent bioethicists and from NIH. Here is a new post from John Lantos at the Hastings Center’s Bioethics Forum blog. And here is coverage of the most recent developments in the New York Times. I’ll continue to aggregate links as warranted.
Regular readers may recall that recently, OHRP sent a determination letter to one of multiple sites (the University of Alabama at Birmingham (UAB)) involved in an RCT (the SUPPORT study) of optimal oxygen levels for premature infants (prior coverage here, here, and here). OHRP’s criticism itself led to considerable criticism among many research ethicists and physician-researchers (see, e.g., here, here, and here), as well as the SUPPORT researchers themselves (here), while others defended OHRP to varying degrees (here, here, and here).
Now, in a new letter to UAB, OHRP clarified that it has no objections to the study design; its objections, instead, pertain to what parents were told in the informed consent documents. Then, in a remarkable move, it announced that it is suspending its compliance actions against UAB, and plans no further action vis-a-vis other SUPPORT sites, pending its issuance of new guidance to address the risks that must be disclosed when conducting clinical trials like SUPPORT. OHRP promises not only the usual notice and comment period following the draft guidance but also an open public meeting, presumably in advance of the draft.
As the OHRP letter itself suggests, the fight within the research ethics community over the SUPPORT study can be seen as part of a larger conversation about the future of human subjects research regulation in the learning healthcare system. OHRP’s guidance-making process in this matter will clearly be one to watch.
I’ve been thinking a lot lately about how our society regulates the integrity of scientific research in an era of fierce competition for diminishing grants and ultracompetitive academic appointments. When I shared a draft paper on this topic a few weeks ago, several colleagues urged me to think more about the role played by academic journals, so I was interested to see this article in Nature last week about a recently uncovered criminal scam defrauding two European science journals and countless would-be authors. It caught my attention because it seems to belie the notion that the journals and the honest scientific community are sophisticated enough actors to be trusted to root out the fabrication, falsification, and plagiarism that constitute “research misconduct” under Federal law. Needless to say, it takes a different kind of expertise to discern scientific misconduct than to uncover a more mundane phishing scam like the one these cons were running, but the anecdote stands as a nice reminder of the fallibility even of great minds.