Along with our partners at the Edmond J. Safra Center for Ethics at Harvard University, the Petrie-Flom Center is thrilled to announce our new jointly hosted Fellow-in-Residence, Mason Marks.
Suicide is a global problem, causing roughly 800,000 deaths per year. In the United States, suicide rates have risen by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than either auto accidents or homicides.
Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI)-based prediction tools. This essay analyzes the risks these systems pose to safety, privacy, and autonomy, risks that have so far been under-explored.
Two parallel tracks of AI-based suicide prediction have emerged.
The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws such as the Health Insurance Portability and Accountability Act (HIPAA), which protects the privacy and security of patient information, and the federal Common Rule, which protects human research subjects.
My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Social suicide prediction, which is essentially unregulated, uses behavioral data mined from consumers’ digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are generally not subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.
An earlier version of this article was published in STAT.
The National Institutes of Health wants your DNA, and the DNA of one million other Americans, for an ambitious project called All of Us. Its goal — to “uncover paths toward delivering precision medicine” — is a good one. But until it can safeguard participants’ sensitive genetic information, you should decline the invitation to join unless you fully understand and accept the risks.
DNA databases like All of Us could enable valuable medical breakthroughs, such as the identification of new disease risk factors and potential drug targets. But these benefits could come at a high price: increased risk to individuals’ genetic data privacy, which current U.S. laws do not adequately protect.
By Mason Marks
Following the Cambridge Analytica scandal, it was reported that Facebook planned to partner with medical organizations to obtain health records on thousands of users. The plans were put on hold when news of the scandal broke. But Facebook doesn’t need medical records to derive health data from its users. It can use artificial intelligence tools, such as machine learning, to infer sensitive medical information from its users’ behavior. I call this process mining for emergent medical data (EMD), and companies use it to sort consumers into health-related categories and serve them targeted advertisements. I will explain how mining for EMD is analogous to the process of medical diagnosis performed by physicians, and why companies that engage in this activity may be practicing medicine without a license.
Last week, Facebook CEO Mark Zuckerberg testified before Congress about his company’s data collection practices. Many lawmakers who questioned him understood that Facebook collects consumer data and uses it to drive targeted ads. However, few members of Congress seemed to understand that the value of data often lies not in the information itself but in the inferences that can be drawn from it. Numerous examples illustrate how health information can be inferred from the behavior of social media users: last year, Facebook announced that it relies on artificial intelligence to predict which users are at high risk for suicide; a leaked document revealed that Facebook identified teens feeling “anxious” and “hopeless”; and data scientists have used Facebook messages and “likes” to predict whether users had substance use disorders. In 2016, researchers analyzed Instagram posts to predict whether users were depressed. In each of these examples, user data was analyzed to sort people into health-related categories.
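To make the mechanics concrete, here is a minimal sketch, in Python with scikit-learn, of how behavioral signals like page “likes” could be mapped to a health-related category. The training data, feature names, and labels below are invented for illustration; the actual models platforms use are proprietary and far more sophisticated.

```python
# A minimal, hypothetical sketch of inferring a health-related label from
# behavioral data. All "likes" and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: pages each user "liked", paired with a label
# a company might assign (1 = placed in a substance-use-related ad category).
likes = [
    "running marathon nutrition",
    "late_night_bars hangover_memes energy_drinks",
    "yoga meditation hiking",
    "hangover_memes liquor_store energy_drinks",
]
labels = [0, 1, 0, 1]

# Turn each user's likes into a bag-of-words feature vector and fit a model.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(likes)
model = LogisticRegression().fit(X, labels)

# Inference on a new user's ostensibly benign likes.
new_user = vectorizer.transform(["energy_drinks liquor_store"])
print(model.predict_proba(new_user)[0, 1])  # probability of the health label
```

The point of the sketch is that no user ever disclosed a diagnosis; the health-related label emerges entirely from patterns in seemingly benign behavior.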
By Mason Marks
FDA Commissioner Scott Gottlieb issued a statement on Tuesday about the controversial plant Mitragyna speciosa, also known as kratom. According to Gottlieb, kratom poses deadly health risks. His conclusion rests partly on a computer model announced in that statement. The use of simulations to inform drug policy is a new development with implications that extend beyond the regulation of kratom. We currently live in the Digital Age, in which most information exists in digital form. But the Digital Age is rapidly evolving into an Age of Algorithms, in which computer software increasingly assumes the roles of human decision makers. The FDA’s use of computer simulations to evaluate drugs is a bold first step into this new era. This essay discusses the potential risks of basing federal drug policies on computer models that have not been thoroughly explained or validated, using the kratom debate as a case study.
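For a sense of what such modeling might involve, here is a minimal sketch, in Python with the open-source RDKit library, of structure-similarity screening, the general family of techniques used to predict a compound’s pharmacology from its chemical structure. This is not the FDA’s actual model, and the SMILES strings are simple placeholders rather than real kratom alkaloids or opioids.

```python
# A minimal sketch of structure-similarity screening: predict pharmacology
# by comparing a query molecule's fingerprint to known reference compounds.
# This illustrates the general technique, not the FDA's actual model.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Placeholder SMILES strings; a real analysis would use kratom alkaloids
# (e.g., mitragynine) and known opioid-receptor ligands.
query_smiles = "CCO"                    # stand-in for a kratom alkaloid
reference_smiles = ["CC(=O)O", "CCN"]   # stand-ins for reference opioids

def fingerprint(smiles: str):
    """Convert a SMILES string into a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

query_fp = fingerprint(query_smiles)
for smi in reference_smiles:
    similarity = DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi))
    # High structural similarity is read as evidence of shared pharmacology;
    # that inferential leap is what the essay asks regulators to validate.
    print(f"{smi}: Tanimoto similarity = {similarity:.2f}")
```

Whether structural similarity reliably predicts receptor activity, and how a given model was validated, are precisely the questions this essay argues should be answered before such outputs drive drug policy.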
Kratom grows naturally in Southeast Asian countries such as Thailand and Malaysia, where it has been used for centuries as a stimulant and pain reliever. In recent years, the plant has gained popularity in the United States as an alternative to illicit and prescription narcotics. Kratom advocates claim it is harmless and useful for treating pain and easing symptoms of opioid withdrawal. However, the FDA contends it has no medical use and causes serious or fatal complications. As a result, the U.S. Drug Enforcement Administration (DEA) may place kratom in Schedule I, its most heavily restricted category.
By Mason Marks
Drug overdose is a leading cause of death in Americans under 50. Opioids are responsible for most drug-related deaths, killing an estimated 91 people each day. In Part I of this three-part series, I discuss how the President’s Commission on Combating Drug Addiction and the Opioid Crisis misinterpreted scientific studies and used data to support unfounded conclusions. In Part II, I explore how the Commission dismissed medical interventions used successfully in the U.S. and abroad, such as kratom and ibogaine. In this third part of the series, I explain how the Commission ignored increasingly proven harm reduction strategies such as drug checking and safe injection facilities (SIFs).
In its final report, released November 1, 2017, the President’s Commission acknowledged that “synthetic opioids, especially fentanyl analogs, are by far the most problematic substances because they are emerging as a leading cause of opioid overdose deaths in the United States.” Speaking before the House Oversight Committee last month, Maryland Governor Larry Hogan stated that of the 1,180 overdose deaths in his state this year, 850 (72%) were due to synthetic opioids. Street drugs are often contaminated with fentanyl and other synthetics. Dealers add them to heroin, and buyers may not be aware that they are consuming adulterated drugs. As a result, they can be caught off guard by the drugs’ potency, which contributes to respiratory depression and death. Synthetic opioids such as fentanyl are responsible for the sharpest rise in opioid-related mortality (see blue line in Fig. 1 below).
By Mason Marks
Last year, more than 64,000 Americans died of drug overdose, which is “now the leading cause of death” in people under 50. Opioids kill an estimated 91 Americans each day and are responsible for most drug-related deaths in the U.S. This public health crisis requires solutions that are supported by science and reason instead of emotion and political ideology. In Part I of this three-part series, I discuss how the President’s Commission on Combating Drug Addiction and the Opioid Crisis misinterpreted scientific studies and used data to support unfounded conclusions. In this second part of the series, I explore how the Opioid Commission ignored medical interventions that are used successfully in the U.S. and abroad. In Part III, I will discuss non-medical interventions such as drug checking and safe injection sites. The Commission’s failure to consider these options was likely driven by emotions such as fear and disgust rather than by a careful review of scientific evidence.
Medical marijuana is currently legal in 29 U.S. states and the District of Columbia, and it is permitted in at least 10 countries. However, the Opioid Commission outright rejected calls to consider medical marijuana as an alternative to opioids for managing pain. Before its first meeting, the Commission solicited input from industry and members of the public on how to address the opioid crisis. In response, it received over 8,000 public comments. According to VICE News, which obtained the documents through a Freedom of Information Act (FOIA) request, most comments were submitted by individuals urging the Commission to “consider medical marijuana as a solution to the opioid epidemic.” A spokesman for the Office of National Drug Control Policy, the Executive Branch body that provides administrative support to the Opioid Commission, reported receiving “more than 7,800 public comments relating to marijuana.”

Despite these comments, the Commission’s final report dismissed the notion that marijuana should play a role in treating chronic pain and opioid addiction. The report cited a recent study from the American Journal of Psychiatry, which concluded that marijuana use was associated with an increased risk of opioid abuse. However, that study relied on data collected more than twelve years ago. One of its authors, Columbia Medical School Professor Mark Olfson, told CNN that if the data were collected today, they could yield different results.
By Mason Marks
The opioid crisis kills at least 91 Americans each day and has far-reaching social and economic consequences for us all. As lawmakers explore solutions to the problem, they should ensure that new regulations are based on scientific evidence and reason rather than emotion or political ideology. Though emotions may motivate the creation of policies and legislation, solutions to the opioid epidemic should be grounded in empirical observation rather than in feelings of anger, fear, or disgust. Legislators must be unafraid to explore bold solutions to the crisis and to take some measured risks. In this three-part series on evidence-backed solutions to the opioid crisis, I discuss proposals under consideration by the Trump Administration, including recent recommendations of the President’s Commission on Combating Drug Addiction and the Opioid Crisis. Though the Commission made some justifiable proposals, it misinterpreted the conclusions of scientific studies and failed to consider evidence-based solutions used in other countries. This first part of the series focuses on the misinterpretation of scientific data.
Last year, more than 64,000 Americans died of drug overdose, which is “now the leading cause of death” in people under 50. Opioids are responsible for most of these deaths. By comparison, the National Safety Council estimates that about 40,000 Americans died in auto crashes last year, and the Centers for Disease Control and Prevention reports that 38,000 people were killed by firearms. Unlike deaths due to cars and firearms, which have remained relatively stable over the past few years, opioid deaths have spiked abruptly. Between 2002 and 2015, U.S. opioid-related deaths nearly tripled, from about 12,000 in 2002 to over 33,000 in 2015. Last year, synthetic opioids such as fentanyl contributed to over 20,000 deaths and accounted for the sharpest increase in opioid fatalities (see blue line in Fig. 1 below).
By Mason Marks
In this brief essay, I describe a new type of medical information that is not protected by existing privacy laws. I call it Emergent Medical Data (EMD) because, at first glance, it has no relationship to your health. Companies can derive EMD from your seemingly benign Facebook posts, a list of videos you watched on YouTube, a credit card purchase, or the contents of your e-mail. A person reading the raw data would be unaware that it conveys any health information. Machine learning algorithms must first massage the data before its health-related properties emerge.
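To illustrate how health-related structure can “emerge” from benign signals, here is a minimal sketch in Python with scikit-learn. It clusters users by viewing habits and shows that a health-adjacent grouping can appear even though no individual data point mentions health; all numbers and topic labels below are hypothetical.

```python
# A minimal, hypothetical sketch of EMD "emerging" from benign data:
# clustering users by viewing habits, then labeling a cluster, rather than
# reading any health fact directly. All data below is invented.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature vectors: hours of video watched per topic
# (cooking, fitness, insomnia-adjacent, gaming) for eight users.
watch_hours = np.array([
    [5, 1, 0, 2],
    [4, 2, 0, 3],
    [0, 0, 6, 5],
    [1, 0, 7, 4],
    [6, 3, 0, 1],
    [0, 1, 5, 6],
    [5, 2, 1, 2],
    [1, 0, 6, 5],
])

# Group users into two clusters based purely on behavioral patterns.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(watch_hours)
print(clusters)
# No single column says "insomnia," but one cluster concentrates users with
# heavy insomnia-adjacent viewing: the emergent, health-related signal a
# company could then target with ads.
```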
Unlike medical information obtained by healthcare providers, which is protected by the Health Insurance Portability and Accountability Act (HIPAA), EMD receives little to no legal protection. A common rationale for maintaining health data privacy is that it promotes full transparency between patients and physicians. HIPAA assures patients that the sensitive conversations they have with their doctors will remain confidential, and the penalties for breaching confidentiality can be steep: in 2016, the Department of Health and Human Services recorded over $20 million in fines resulting from HIPAA violations. When companies mine for EMD, they are not bound by HIPAA or subject to these penalties.
Mason Marks is joining Bill of Health as a regular contributor.
Mason is a Visiting Fellow at Yale Law School’s Information Society Project. His research focuses on the application of artificial intelligence to clinical decision making in healthcare. He is particularly interested in the regulation of machine learning and the obstacles to its adoption by the medical community. His secondary interests include data privacy and the regulation of emerging technologies such as 3D bioprinting, surgical robotics, and genome editing.
Mason received his J.D. from Vanderbilt Law School. He is a member of the California Bar and practices intellectual property law in the San Francisco Bay Area. He has represented clients in the biotechnology, pharmaceutical, and medical device industries. Prior to law school, he received his M.D. from Tufts University and his B.A. in biology from Amherst College.