Washington Psilocybin Bill Would Legalize Supported Adult Use

By Mason Marks

On Tuesday, Washington State legislators filed SB 5660, a bill that would legalize the supported adult use of psilocybin by people 21 years of age and older.

Sponsored by Senators Jesse Salomon and Liz Lovelett, the bill, known as the Washington Psilocybin Wellness and Opportunity Act, contains several innovative features, including a Social Opportunity Program to help address harms caused by the war on drugs, a provision to support small businesses, and accommodations for people with certain medical conditions to receive the psychedelic substance at home.

I had the privilege of helping to draft the Washington Psilocybin Services Wellness and Opportunity Act with input from the Psychedelic Medicine Alliance of Washington and my colleague John Rapp of the law firm Harris Bricken. We had previously collaborated on the psychedelic decriminalization resolution adopted unanimously by the Seattle City Council.

Read More

Q&A with Mason Marks on New Psychedelics Law and Regulation Initiative

By Chloe Reichel

On June 30th, the Petrie-Flom Center announced the launch of a three-year research initiative, the Project on Psychedelics Law and Regulation (POPLAR), which is supported by a generous grant from the Saisei Foundation.

The Project on Psychedelics Law and Regulation at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School will advance evidence-based psychedelics law and policy.

In 2017, the FDA designated MDMA a breakthrough therapy for post-traumatic stress disorder, and in 2018 the agency recognized psilocybin as a breakthrough therapy for treatment-resistant depression. These designations indicate that psychedelics may represent substantial improvements over existing treatments for mental health conditions. Many other psychedelics, including ibogaine, ketamine, and dimethyltryptamine, are the focus of ongoing psychiatric research and commercialization efforts.

Despite the proliferation of clinical research centers and increasing private investment in psychedelic drug development, there is a relative lack of research on the ethical, legal, and social implications of psychedelics research, commerce, and therapeutics.

In the following interview, which has been edited and condensed, Senior Fellow and POPLAR Project Lead Mason Marks explains how POPLAR will fill this gap, and previews some of the initiative’s topics of inquiry.

Read More

Psychedelics and America: A Digital Symposium

By Mason Marks

In 2020, the psychedelics research and policy reform renaissance is in full swing. Prohibited by federal law since the 1970s, psychedelic substances can alter how people see themselves, the world, and those around them. Clinical trials suggest they may help people overcome ingrained thought patterns associated with depression, anxiety, and addiction.

Acknowledging their spiritual and therapeutic potential, universities have established new psychedelics research programs. The Food and Drug Administration (FDA) has deemed them breakthrough therapies for depression and post-traumatic stress disorder. This designation means they could be significant improvements over traditional treatments such as selective serotonin reuptake inhibitors (SSRIs). Accordingly, the FDA has put some psychedelics on an accelerated course toward approval. Eventually, they could help millions who have not benefitted from existing therapies.

However, despite their breakthrough status, psychedelics will not become FDA approved for several years. Meanwhile, the COVID-19 pandemic is making the country’s mental health crisis worse. According to the Centers for Disease Control and Prevention, rates of depression, anxiety, substance use, and suicidal thoughts have risen in the past nine months.

Read More

As Cities Decriminalize Psychedelics, Law Enforcement Should Step Back

By Mason Marks

Amid rising rates of depression, suicide, and substance use disorders, drug makers have scaled back investment in mental health research. Psychedelics may fill the growing need for innovative psychiatric drugs, but federal prohibition prevents people from accessing their benefits. Nevertheless, some cities, dissatisfied with the U.S. war on drugs, are decriminalizing psychedelics.

In 2019, Denver became the first U.S. city to decriminalize mushrooms containing psilocybin, a psychedelic the FDA considers a breakthrough therapy for major depressive disorder (MDD) and treatment-resistant depression.

In a historic vote, Denver residents approved Ordinance 301, which made prosecuting adults who possess psilocybin-containing mushrooms for personal use the city’s “lowest law enforcement priority.” Since then, the city councils of Oakland and Santa Cruz, California, have adopted their own decriminalization measures.

As a Schedule I controlled substance, psilocybin remains illegal under federal law, and despite ongoing clinical trials, it is unlikely to become FDA approved for several years. Social distancing requirements due to COVID-19 are disrupting medical research, causing further delays. But as the November election approaches, other U.S. cities are preparing to vote on psychedelics.

Read More

Artificial Intelligence for Suicide Prediction

Suicide is a global problem that causes 800,000 deaths per year. In the United States, suicide rates rose by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than die in auto accidents or by homicide.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Though essentially unregulated, social suicide prediction uses behavioral data mined from consumers’ digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.
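To make the mechanism concrete, here is a minimal, purely hypothetical sketch of the kind of text-classification pipeline such a system could use. The posts, labels, features, and threshold below are all invented for illustration; this is not the actual model used by Facebook, Twitter, or any other company.

```python
# Hypothetical illustration of "social suicide prediction": a generic text
# classifier scores posts and flags those above an arbitrary risk cutoff.
# The data and threshold are invented; real systems are proprietary.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples (1 = previously flagged as high risk, 0 = not).
posts = [
    "I can't take this anymore, nothing matters",
    "Great hike with friends this weekend",
    "I feel hopeless and want to disappear",
    "So excited to start my new job",
]
labels = [1, 0, 1, 0]

# Standard text-classification pipeline: TF-IDF features plus logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; whether and where to set the cutoff is a policy decision.
new_post = "everything feels pointless lately"
risk_score = model.predict_proba([new_post])[0, 1]
FLAG_THRESHOLD = 0.5  # arbitrary cutoff, for illustration only
print(f"risk score: {risk_score:.2f}, flagged: {risk_score >= FLAG_THRESHOLD}")
```

Even this toy version makes the stakes tangible: the score is only as good as the behavioral data and labels behind it, and the cutoff that triggers an intervention is a judgment call rather than a technical fact.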

Read More

DNA Donors Must Demand Stronger Privacy Protection

By Mason Marks and Tiffany Li

An earlier version of this article was published in STAT.

The National Institutes of Health wants your DNA, and the DNA of one million other Americans, for an ambitious project called All of Us. Its goal — to “uncover paths toward delivering precision medicine” — is a good one. But until it can safeguard participants’ sensitive genetic information, you should decline the invitation to join unless you fully understand and accept the risks.

DNA databases like All of Us could yield valuable medical breakthroughs, such as identifying new disease risk factors and potential drug targets. But these benefits could come at a high price: increased risk to individuals’ genetic data privacy, something that current U.S. laws do not adequately protect.

Read More

Facebook Should ‘First Do No Harm’ When Collecting Health Data

By Mason Marks

Following the Cambridge Analytica scandal, it was reported that Facebook planned to partner with medical organizations to obtain health records on thousands of users. The plans were put on hold when news of the scandal broke. But Facebook doesn’t need medical records to derive health data from its users. It can use artificial intelligence tools, such as machine learning, to infer sensitive medical information from its users’ behavior. I call this process mining for emergent medical data (EMD), and companies use it to sort consumers into health-related categories and serve them targeted advertisements. I will explain how mining for EMD is analogous to the process of medical diagnosis performed by physicians, and how companies that engage in this activity may be practicing medicine without a license.

Last week, Facebook CEO Mark Zuckerberg testified before Congress about his company’s data collection practices. Many lawmakers who questioned him understood that Facebook collects consumer data and uses it to drive targeted ads. However, few Members of Congress seemed to understand that the value of data often lies not in the information itself, but in the inferences that can be drawn from it. Numerous examples illustrate how health information is inferred from the behavior of social media users: Last year, Facebook announced its reliance on artificial intelligence to predict which users are at high risk for suicide; a leaked document revealed that Facebook identified teens feeling “anxious” and “hopeless”; and data scientists used Facebook messages and “likes” to predict whether users had substance use disorders. In 2016, researchers analyzed Instagram posts to predict whether users were depressed. In each of these examples, user data was analyzed to sort people into health-related categories.
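As a purely illustrative sketch of how such inferences work, consider a generic classifier trained on page “likes”: each like becomes a binary feature, and the model learns which combinations of likes correlate with a health-related label. The pages, users, and labels below are invented, and this is not any company’s actual system.

```python
# Invented example of inferring a health-related category from page "likes".
# Each user is represented by binary features (liked page -> 1); a simple
# model then assigns a probability of belonging to a sensitive category.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: users' liked pages and a sensitive label.
users = [
    {"craft_beer_page": 1, "late_night_memes": 1},
    {"running_club": 1, "meal_prep_tips": 1},
    {"bar_crawl_events": 1, "hangover_cures": 1},
    {"yoga_studio": 1, "book_club": 1},
]
labels = [1, 0, 1, 0]  # 1 = invented "risky drinking" category

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(users)
model = LogisticRegression().fit(X, labels)

# A new user never shared health information, yet a category is inferred
# from behavior alone; this is the pattern described as emergent medical data.
new_user = vectorizer.transform([{"hangover_cures": 1, "late_night_memes": 1}])
print("inferred probability:", model.predict_proba(new_user)[0, 1])
```

The point is not the particular algorithm but the pattern: ordinary behavioral data, aggregated and modeled, yields inferences a user never volunteered.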

Read More

Simulated Side Effects: FDA Uses Novel Computer Model to Guide Kratom Policy

By Mason Marks

FDA Commissioner Scott Gottlieb issued a statement on Tuesday about the controversial plant Mitragyna speciosa, which is also known as kratom. According to Gottlieb, kratom poses deadly health risks. His conclusion is based partly on a computer model announced in that statement. The use of simulations to inform drug policy is a new development with implications that extend beyond the regulation of kratom. We currently live in the Digital Age, a period in which most information is in digital form. However, the Digital Age is rapidly evolving into an Age of Algorithms, in which computer software increasingly assumes the roles of human decision makers. The FDA’s use of computer simulations to evaluate drugs is a bold first step into this new era. This essay discusses the potential risks of basing federal drug policies on computer models that have not been thoroughly explained or validated, using the kratom debate as a case study.
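To give a sense of what “validated” means in this context, here is a generic, hypothetical sketch of out-of-sample validation. It has nothing to do with the FDA’s actual kratom model, whose methods are not described here; the data are synthetic and the model is a placeholder.

```python
# Generic illustration of model validation (unrelated to the FDA's kratom model):
# fit a predictive model on one portion of the data, then check its predictions
# against held-out observations it never saw.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # synthetic input measurements
y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.3, size=200)  # synthetic outcome

# Hold out a quarter of the data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# Out-of-sample error is basic evidence that a model's predictions can be trusted;
# a model presented without such checks asks, in effect, to be taken on faith.
print("held-out mean absolute error:", mean_absolute_error(y_test, model.predict(X_test)))
```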

Kratom grows naturally in Southeast Asian countries such as Thailand and Malaysia, where it has been used for centuries as a stimulant and pain reliever. In recent years, the plant has gained popularity in the United States as an alternative to illicit and prescription narcotics. Kratom advocates claim it is harmless and useful for treating pain and easing symptoms of opioid withdrawal. However, the FDA contends it has no medical use and causes serious or fatal complications. As a result, the U.S. Drug Enforcement Administration (DEA) may categorize kratom in Schedule I, its most heavily restricted category.

Read More

The Opioid Crisis Requires Evidence-Based Solutions, Part III: How the President’s Commission on Combating Drug Addiction Dismissed Harm Reduction Strategies

By Mason Marks

Drug overdose is a leading cause of death among Americans under 50. Opioids are responsible for most drug-related deaths, killing an estimated 91 people each day. In Part I of this three-part series, I discuss how the President’s Commission on Combating Drug Addiction and the Opioid Crisis misinterpreted scientific studies and used data to support unfounded conclusions. In Part II, I explore how the Commission dismissed medical interventions used successfully in the U.S. and abroad, such as kratom and ibogaine. In this third part of the series, I explain how the Commission ignored increasingly well-supported harm reduction strategies such as drug checking and safe injection facilities (SIFs).

In its final report, released November 1, 2017, the President’s Commission acknowledged that “synthetic opioids, especially fentanyl analogs, are by far the most problematic substances because they are emerging as a leading cause of opioid overdose deaths in the United States.” Speaking before the House Oversight Committee last month, Maryland Governor Larry Hogan stated that of the 1,180 overdose deaths in his state this year, 850 (72%) were due to synthetic opioids. Street drugs are often contaminated with fentanyl and other synthetics. Dealers add them to heroin, and buyers may not be aware that they are consuming adulterated drugs. As a result, buyers can be caught off guard by the drugs’ potency, which contributes to respiratory depression and death. Synthetic opioids such as fentanyl are responsible for the sharpest rise in opioid-related mortality (see blue line in Fig. 1 below).

Read More