Challenges for Investigators—Generating Reproducible Research Results

This post is part of a series on emerging research challenges and solutions. The introduction to the series is available here, and all posts in the series are available here.

By John P.A. Ioannidis, MD, DSc, C.F. Rehnborg Chair in Disease Prevention, Professor of Medicine, of Health Research and Policy, of Biomedical Data Science, and of Statistics, and Co-Director, Meta-Research Innovation Center at Stanford (METRICS), Stanford University

Generating reproducible research results is not an easy task. As discussions about a reproducibility crisis become more common and occasionally heated, investigators may feel intimidated or even threatened, caught in the middle of the reproducibility wars. Some feel that the mounting pressure to deliver (both quantity and quality) may be threatening the joy of doing science and even the momentum to explore bold ideas. However, this is a gross misunderstanding. The effort to understand the shortcomings of reproducibility in our work and to find ways to improve our research standards is not some sort of externally imposed police auditing. It is a grassroots movement that stems from scientists themselves who want to improve their work, including its validity, relevance, and utility.

As has been clarified before, reproducibility of results is just one of many aspects of reproducibility. It is difficult to address in isolation, without also considering reproducibility of methods and reproducibility of inferences. Reproducibility of methods is usually impossible to assess, because unfortunately the triplet of software, script/code, and complete raw data is hardly ever available in complete, functional form. Lack of reproducibility of inferences leads to debates, even when the evidence seems strong and well-rounded. Reproducibility of results, when considered in the context of these other two components, is unevenly pursued across disciplines. Some fields, like genetic epidemiology, have long understood the importance of routinely incorporating replication as a sine qua non in their efforts. Others still consider replication second-class, “me too” research. Nevertheless, it can be shown (see Ioannidis, Behavioral and Brain Sciences, in press) that in most circumstances replication has at least as much value as original discovery, and often more. However, this leads to the question: how do we reward and incentivize investigators to follow a reproducible research path?
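For a sense of what that triplet looks like when it is available, here is a minimal sketch of a self-describing analysis script. The file name and checksum workflow are hypothetical, not drawn from any particular study; the point is that it fixes the random seed, logs the software environment, and fingerprints the archived raw data, so a third party can verify they are rerunning the identical computation.

```python
# Minimal sketch of a methods-reproducible analysis entry point.
# Hypothetical file names; the point is that code, environment, and
# raw data are all declared explicitly rather than left implicit.
import hashlib
import platform
import random

RAW_DATA = "raw_data.csv"   # archived source data, never edited in place
SEED = 42                   # fixed seed so any stochastic step replays exactly

def main() -> None:
    random.seed(SEED)
    # Log the software environment next to the results.
    print(f"python {platform.python_version()}")
    # Fingerprint the raw data; readers can compare against a published hash.
    with open(RAW_DATA, "rb") as f:
        print("raw-data sha256:", hashlib.sha256(f.read()).hexdigest())
    # ... the actual analysis code, reading only RAW_DATA, would follow ...

if __name__ == "__main__":
    main()
```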

Read More

Challenges for Journals—Encouraging Sound Science

This post is part of a series on emerging research challenges and solutions. The introduction to the series is available here, and all posts in the series are available here.

By Barbara A. Spellman, Professor of Law and Professor of Psychology, University of Virginia School of Law

Journals and scientists should be BFFs. But currently they are frenemies. Or, in adult-speak:

Journals play an important role in ensuring that the scientific enterprise is sound. Their most obvious function is to publish science—good science, science that has been peer-reviewed by experts and is of interest to a journal’s readership. But in fulfilling that mission, journals may provide incentives to scientists that undermine the quality of published science and distort the scientific record.

Journal policies certainly contributed to the replication crisis. As businesses, publishers (appropriately) want to make money; to do so they need people to buy, read, and cite their journals. To make that happen, editors seek articles that are novel, that confirm some new hypothesis, and that have clear results. Scientists know that editors want articles with these qualities. Accordingly, scientists may (knowingly or not) bias the scientific process to produce that type of result.

Read More

Systems Matter: Research Environments and Institutional Integrity

This post is part of a series on emerging research challenges and solutions. The introduction to the series is available here, and all posts in the series are available here.

By CK Gunsalus, Director, National Center for Professional and Research Ethics (NCPRE), University of Illinois Urbana-Champaign

We know what it takes for institutions and scholars to produce high-quality, high-integrity research, and yet we do not always act upon that knowledge. As far back as 1988, Paul J. Friedman described both the roots of systemic shortcomings and approaches for conducting trustworthy research. Despite a clear understanding of the issues and of steps that would improve our research and educational environments, the academy continues to be dogged by those same systemic issues. A recent National Academies of Sciences, Engineering, and Medicine consensus study, Fostering Integrity in Research, in which I participated as a panel member, explores that same disconnect and makes recommendations. The bottom line is this: we must shift our attention and energy away from individual bad actors—though they exist and must be addressed—and toward the highly complex ecosystem within which research is conducted.

An update of an earlier appraisal published in 1992, the 2017 NASEM report describes the transformation of research through advances in technology, globalization, increased interdisciplinarity, growing competition, and multiplying policy applications. It identifies six core values underlying research integrity—objectivity, openness, accountability, honesty, fairness, and stewardship—and outlines best practices, including checklists, for all aspects of the research enterprise. I encourage you to read it and to use these tools in your own work.

All the reports in the world won’t improve research integrity, however, if we don’t do the work in our institutions, departments, and research groups. There are many components to this effort, some of which are discussed in separate posts by my colleagues John P.A. Ioannidis and Barbara A. Spellman elsewhere in this symposium. Let’s focus here on institutional infrastructure.

Read More

Psychoneuroimmunology and the Mind’s Impact on Health

If you are a skier like me, you likely reveled in watching the alpine skiing events during this year’s Olympic Winter Games in Pyeongchang, South Korea. Having raced myself when I was younger, I recall the feeling of being in the starting gate, with all the anticipation and excitement it brings. But my memories are more than mere recollections of “images” in my head, for I also have vivid muscle memory: when watching and cheering for Lindsey Vonn and Ted Ligety, I can literally feel my leg muscles contract as if I were on the course myself. Because I skied for so much of my life, watching as a spectator brings back hardwired responses that I can still call up, quite intuitively, simply by visualizing a course.

Researchers at Stanford have now corroborated what athletes and psychologists have long believed: that visualizing ourselves performing a task, such as skiing down a race course or carrying out some other routine, improves our performance and increases our success rate. The findings, reported by neuroscientists in Neuron, suggest that mental rehearsal prepares our minds for real-world action. Using a new tool called a brain-machine interface, the researchers showed how mental learning translates into physical performance, offering a potentially new way to study and understand the mind.
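For readers wondering what a brain-machine interface does computationally, one standard building block is a decoder that maps recorded neural activity onto a movement command. The toy below is a generic linear decoder with invented weights and simulated firing rates; it is a sketch of the general technique, not the Neuron study’s actual method.

```python
# Toy linear brain-machine-interface decoder: cursor velocity = W @ rates.
# Weights and firing rates are invented for illustration; real decoders
# are fit to recordings from the subject's own motor cortex.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 8

W = rng.normal(size=(2, n_neurons))            # decoder weights (made up)
rates = rng.poisson(lam=5.0, size=n_neurons)   # simulated spike counts

vx, vy = W @ rates                             # decoded 2-D cursor velocity
print(f"decoded velocity: ({vx:.2f}, {vy:.2f})")
```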

Could this new tool help us replicate cognitive responses to real-world settings in a controlled environment? More studies will need to be carried out to further test these findings and better understand the results. One point to take into account is that performing a real action differs from performing the same task mentally via a brain-machine interface, since one’s muscles, skeletal system, and nervous system all work in tandem; still, such an interface would seem to have very practical implications for people who use prosthetics or who are paralyzed. As our knowledge of biomechanics and neuroscience advances, along with our ability to interface the two, we may be able to use this technology to create more life-like prosthetics and perhaps, by harnessing the mind’s inborn processes and complex synapses, help others walk again.

Looking toward the future, another interesting avenue of research would be to use a brain-machine interface to study psychoneuroimmunology. We may not have the technology or ability to conduct such a study at the moment, but it seems plausible that in the near future we could develop the tools needed for more rigorous research on the interactions between psychological processes and the nervous and immune systems. If visualizing winning a ski race improves our performance, why not also envision good health outcomes: resilient bodies, strong immune systems, plentiful and efficient white blood cells? Simply willing ourselves to health may not be possible, but having a positive outlook has been shown to affect the course of disease, while, conversely, increased levels of fear and distress before surgery have been associated with worse outcomes. These are but a few examples of the growing evidence of the mind’s impact on health, and they highlight the importance of a holistic approach that considers the roles of behavior, mood, thought, and psychology in bodily homeostasis.

Read More

Simulated Side Effects: FDA Uses Novel Computer Model to Guide Kratom Policy

By Mason Marks

FDA Commissioner Scott Gottlieb issued a statement on Tuesday about the controversial plant Mitragyna speciosa, also known as kratom. According to Gottlieb, kratom poses deadly health risks. His conclusion is based in part on a computer model announced in that same statement. The use of simulations to inform drug policy is a new development, with implications that extend beyond the regulation of kratom. We currently live in the Digital Age, a period in which most information is in digital form. But the Digital Age is rapidly evolving into an Age of Algorithms, in which computer software increasingly assumes the roles of human decision makers. The FDA’s use of computer simulations to evaluate drugs is a bold first step into this new era. This essay discusses the potential risks of basing federal drug policy on computer models that have not been thoroughly explained or validated, using the kratom debate as a case study.
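To make the validation concern concrete, here is a minimal sketch of the kind of sanity check one would want before a model’s predictions inform policy: scoring its outputs against compounds whose real-world pharmacology is already known. The compound names and values below are invented placeholders, not data from the FDA’s model.

```python
# Hypothetical hold-out check for a drug-classification model. All values
# are invented; the point is the procedure, not the numbers.
known_opioid_like = {"compound_A": True, "compound_B": False, "compound_C": True}
model_predictions = {"compound_A": True, "compound_B": True, "compound_C": True}

correct = sum(model_predictions[c] == truth for c, truth in known_opioid_like.items())
accuracy = correct / len(known_opioid_like)
print(f"agreement with known pharmacology: {accuracy:.0%}")
# A model that cannot reproduce known cases should not drive scheduling decisions.
```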

Kratom grows naturally in Southeast Asian countries such as Thailand and Malaysia, where it has been used for centuries as a stimulant and pain reliever. In recent years, the plant has gained popularity in the United States as an alternative to illicit and prescription narcotics. Kratom advocates claim it is harmless and useful for treating pain and easing symptoms of opioid withdrawal. However, the FDA contends it has no medical use and causes serious or fatal complications. As a result, the US Drug Enforcement Administration (DEA) may place kratom in Schedule I, its most heavily restricted category.

Read More

“Right to Try” Does Not Help Patients

Co-blogged by Christopher Robertson and Kelly McBride Folkers (a research associate at the Division of Medical Ethics, NYU School of Medicine)

In 2014, Arizonans overwhelmingly voted in favor of a ballot referendum that claimed to give terminally ill patients the “right to try” experimental drugs that have not yet been approved by the Food and Drug Administration (FDA). Despite the policy’s broad support, it has yet to help a single patient in Arizona obtain an experimental drug they could not have gotten before. Thirty-seven other states have also passed right-to-try bills, but they have likewise seen little real impact for patients.

“Right to try” has moved to the federal stage: the U.S. Senate unanimously passed such a bill last August without even holding a hearing. The House Energy & Commerce Subcommittee on Health considered the bill in an October hearing, but it failed to garner much enthusiasm among committee members. Vice President Mike Pence has advocated for a federal right-to-try law, and he recently met with FDA Commissioner Scott Gottlieb and House leadership to encourage passage of the bill this year.

Read More

The Opioid Crisis Requires Evidence-Based Solutions, Part III: How the President’s Commission on Combating Drug Addiction Dismissed Harm Reduction Strategies

By Mason Marks

Drug overdose is a leading cause of death among Americans under 50. Opioids are responsible for most drug-related deaths, killing an estimated 91 people each day. In Part I of this three-part series, I discuss how the President’s Commission on Combating Drug Addiction and the Opioid Crisis misinterpreted scientific studies and used data to support unfounded conclusions. In Part II, I explore how the Commission dismissed medical interventions used successfully in the U.S. and abroad, such as kratom and ibogaine. In this third part of the series, I explain how the Commission ignored increasingly well-supported harm reduction strategies such as drug checking and safe injection facilities (SIFs).

In its final report, released November 1, 2017, the President’s Commission acknowledged that “synthetic opioids, especially fentanyl analogs, are by far the most problematic substances because they are emerging as a leading cause of opioid overdose deaths in the United States.” Speaking before the House Oversight Committee last month, Maryland Governor Larry Hogan stated that of the 1,180 overdose deaths in his state this year, 850 (72%) were due to synthetic opioids. Street drugs are often contaminated with fentanyl and other synthetics: dealers add them to heroin, and buyers may not be aware that they are consuming adulterated drugs. As a result, they can be caught off guard by the drugs’ potency, which contributes to respiratory depression and death. Synthetic opioids such as fentanyl are responsible for the sharpest rise in opioid-related mortality (see blue line in Fig. 1 below).

Read More

Democratized Diagnostics: Why Medical Artificial Intelligence Needs Vetting

Pancreatic cancer is one of the deadliest illnesses out there. The five-year survival rate of patients with the disease is only about 7%. This is, in part, because the disease produces few observable symptoms early enough for effective treatment. As a result, by the time many patients are diagnosed, the prognosis is poor. There is an app, however, that is attempting to change that. BiliScreen, developed by researchers at the University of Washington, is designed to help users identify pancreatic cancer early with an algorithm that analyzes selfies. Users take photos of themselves, and the app’s artificially intelligent algorithm detects slight discolorations in the skin and eyes associated with early pancreatic cancer.
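For a sense of how such an algorithm might work, here is a minimal sketch of a selfie-based yellowness screen. It is not BiliScreen’s actual method: the sclera coordinates, cutoff, and function name are hypothetical, and a real tool would need automated eye detection, color calibration, and clinical validation.

```python
# Hypothetical selfie-based jaundice screen (NOT BiliScreen's algorithm).
# Idea: bilirubin buildup yellows the sclera, shifting its color balance,
# so compare red/green intensity against blue in the eye region.
import numpy as np
from PIL import Image

def yellowness_score(image_path: str, sclera_box: tuple) -> float:
    """Crude yellowness metric for a hand-labeled sclera region.

    sclera_box is a (left, upper, right, lower) pixel box around the
    white of the eye; a real app would locate it automatically.
    """
    img = Image.open(image_path).convert("RGB")
    sclera = np.asarray(img.crop(sclera_box), dtype=float)
    r = sclera[..., 0].mean()
    g = sclera[..., 1].mean()
    b = sclera[..., 2].mean()
    # Yellow = strong red+green relative to blue; roughly normalized to [0, 1].
    return ((r + g) / 2.0 - b) / 255.0

score = yellowness_score("selfie.jpg", sclera_box=(410, 220, 470, 250))
if score > 0.15:  # cutoff is made up; real screening needs calibration
    print(f"Elevated yellowness ({score:.2f}); consider clinical follow-up.")
else:
    print(f"Yellowness {score:.2f} below cutoff.")
```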

Diagnostic apps like BiliScreen represent a huge step forward for preventive health care.  Imagine a world in which the vast majority of chronic diseases are caught early because each of us has the power to screen ourselves on a regular basis.  One of the big challenges for the modern primary care physician is convincing patients to get screened regularly for diseases that have relatively good prognoses when caught early.

I’ve written before about the possible impacts of artificial intelligence and algorithmic medicine, arguing that both medicine and law will have to adapt as machine-learning algorithms surpass physicians in their ability to diagnose and treat disease.  These pieces, however, primarily consider artificially intelligent algorithms licensed to and used by medical professionals in hospital or outpatient settings.  They are about the relationship between a doctor and the sophisticated tools in her diagnostic toolbox — and about how relying on algorithms could decrease the pressure physicians feel to order unnecessary tests and procedures to avoid malpractice liability.  There was an underlying assumption that these algorithms had already been evaluated and approved for use by the physician’s institution, and that the physician had experience using them.  BiliScreen does not fit this mold — the algorithm is not a piece of medical equipment used by hospitals, but rather part of an app that could be downloaded and used by anyone with a smartphone.  Accordingly, apps like BiliScreen fall into a category of “democratized” diagnostic algorithms. While this democratization has the potential to drastically improve preventive care, it also has the potential to undermine the financial sustainability of the U.S. health care system.

Read More

The Problematic Patchwork of State Medical Marijuana Laws – New Research

By Abraham Gutman

The legal status of medical marijuana in the United States is unique. On one hand, the Controlled Substances Act of 1970 classifies marijuana as a Schedule I drug with no accepted medical use and a high potential for abuse. On the other hand, as of February 1, 2017, 27 states and the District of Columbia had passed laws authorizing the use of medical marijuana. This discrepancy between federal and state regulation has led to wide variation in the ways medical marijuana is regulated at the state level.

In a study published today in Addiction, our team of researchers from the Temple University Center for Public Health Law Research and the RAND Drug Policy Research Center finds that state laws mimic some aspects of federal prescription drug and controlled substances laws, as well as regulatory strategies used for alcohol, tobacco, and traditional medicines.

In the past, studies of medical marijuana laws have focused on spillover effects from medical to recreational use, not on whether the laws regulate marijuana effectively as a medicine. Using policy surveillance methods to analyze the state of medical marijuana laws and their variation across states, this study lays the groundwork for future research evaluating the implementation, impacts, and efficacy of these laws.

The study focuses on three domains of medical marijuana regulation, as reflected in laws in effect on February 1, 2017: patient protections and requirements, product safety, and dispensary regulation.
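To see what policy surveillance means in practice, the toy below reduces each state’s statute to comparable coded variables within those domains. The states and values are invented for illustration, not the study’s actual data.

```python
# Toy policy-surveillance coding: each jurisdiction's law becomes a row of
# comparable yes/no features. All values here are invented placeholders.
coded_laws = {
    "State A": {"patient_id_card": True,  "product_testing": True,  "dispensaries": True},
    "State B": {"patient_id_card": True,  "product_testing": False, "dispensaries": True},
    "State C": {"patient_id_card": False, "product_testing": False, "dispensaries": False},
}

# Cross-state comparison falls out of the coding, e.g. how many coded
# jurisdictions require product safety testing?
n_testing = sum(law["product_testing"] for law in coded_laws.values())
print(f"{n_testing} of {len(coded_laws)} coded jurisdictions require product testing")
```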

Here’s some of what we found:

Read More

The First Human Body Transplant – Ethical and Legal Considerations

By Ana S. Iltis, PhD

To what lengths should we go to preserve human life? This is a question many are asking after hearing that three men plan to make medical history by conducting the first human head transplant. Or, rather, whole-body transplant. Italian neurosurgeon Dr. Sergio Canavero and Chinese surgeon Dr. Xiaoping Ren plan to provide a Russian volunteer, Valery Spiridonov, with a new body. During the procedure, Spiridonov’s head would be detached from his body and, with the help of a crane, surgeons would move the head and attach it to the donor body. But is this ethical? What role might law and regulation play in monitoring the surgeons or in assessing their conduct after the fact?

Critics call the plan crazy, unethical, and sure to fail. The likelihood of success is very low, and the risk of Spiridonov dying is high. Spiridonov says that as soon as animal studies confirm the possibility of survival, the risks will be worth taking. He has Werdnig-Hoffmann disease, a genetic disorder that destroys muscle and nerve cells. He is confined to a wheelchair and has lived longer than expected. Body transplantation offers him his best chance at a life worth living.

Read More