
The Troubling Prevalence of Medical Record Errors

Medical diagnoses and treatments carry plenty of potential complications on their own; errors in medical records add yet another opportunity for improper treatment.

A recent article from Kaiser Health News (KHN) discussed several examples of dangerous medical record errors: a hospital pathology report identifying cancer that never reached the patient’s neurosurgeon; a patient whose record incorrectly identified her as having an under-active rather than overactive thyroid, potentially subjecting her to harmful medication; and a patient who discovered pages of someone else’s medical records tucked into her father’s. Beyond incorrect information, omitting medications, allergies, or lab results from a patient’s records can be just as dangerous.

The “one patient, one record” model aims to “bring patient records and data into one centralized location that all clinicians will be able to access as authorized.” This enables providers to see the full picture of a patient’s medical condition. It also reduces the number of questions a patient must answer about their conditions and history at each visit, and with it the chances of error.
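As a loose illustration of what “one patient, one record” implies architecturally, here is a minimal sketch in Python of a centralized record store with an authorization check. Every name and structure in it is hypothetical, invented for this post rather than drawn from any real system:

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """One canonical record per patient: medications, allergies, labs."""
    patient_id: str
    medications: list = field(default_factory=list)
    allergies: list = field(default_factory=list)
    lab_results: list = field(default_factory=list)

class CentralRecordStore:
    """Hypothetical store enforcing 'one patient, one record'."""

    def __init__(self):
        self._records = {}        # patient_id -> the single shared record
        self._authorized = set()  # (clinician_id, patient_id) grants

    def authorize(self, clinician_id, patient_id):
        self._authorized.add((clinician_id, patient_id))

    def get_record(self, clinician_id, patient_id):
        # All authorized clinicians read and write the same object,
        # so a correction made anywhere is visible everywhere.
        if (clinician_id, patient_id) not in self._authorized:
            raise PermissionError("clinician not authorized for this patient")
        return self._records.setdefault(patient_id, PatientRecord(patient_id))
```

The point of the single shared object is exactly the KHN examples above: a corrected thyroid diagnosis or a delivered pathology report propagates to every authorized clinician at once, instead of living in one office’s copy.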

Other benefits, such as cost and care coordination, also add to the appeal of centralized records.



Artificial Intelligence for Suicide Prediction

Suicide causes some 800,000 deaths per year worldwide. In the United States, suicide rates rose by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than either auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing prediction tools based on artificial intelligence (AI). This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Unlike its medical counterpart, it is essentially unregulated: it uses behavioral data mined from consumers’ digital interactions, and the companies involved, which include large internet platforms such as Facebook and Twitter, are generally not subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.
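To make the two tracks concrete, here is a deliberately toy sketch of what a social suicide prediction pipeline amounts to technically: a classifier trained on behavioral signals that are not medical records at all. The features, data, and model are synthetic and invented purely for illustration; real platform systems are proprietary and far more elaborate:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic behavioral features for 200 hypothetical users:
# late-night activity rate, posts per day, negative-sentiment fraction.
# Nothing here is clinical data, so HIPAA never enters the picture.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] + X[:, 2] > 1.1).astype(int)  # synthetic "flagged" labels

model = LogisticRegression().fit(X, y)

# Ordinary consumer data goes in; a health-related risk score comes out.
new_user = [[0.9, 0.4, 0.8]]
print(model.predict_proba(new_user)[0, 1])
```

The sketch’s point is the pipeline, not the model: because the inputs are consumer behavior rather than patient records, nothing in this loop triggers HIPAA, medical ethics review, or the Common Rule.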



When Fertility Doctors Use Their Own Sperm, and Families Don’t Find Out for Decades

An Idaho U.S. District Court ruled this week that parents can provisionally sue the fertility doctor who, in 1980, used his own sperm to create their daughter, so long as their claims aren’t barred by the many years that have passed since the alleged misconduct, which DNA tests substantiate. The daughter, now almost 40, discovered the fraud when she tested her ancestry with a mail-order DNA kit.

The facts are scandalous—but not unique. A handful of similar cases have recently come to light.



Data-driven Medicine Needs a New Profession: Health Information Counseling

By Barbara Prainsack, Alena Buyx, and Amelia Fiske

Have you ever clicked ‘I agree’ to share information about yourself on a health app on your smartphone? Wondered if the results of a new therapy reported on a patient community website were accurate? Considered altering a medical device to better meet your own needs, but had doubts about how the changes might affect its function?

While these kinds of decisions are increasingly routine, there is no clear path for getting information on health-related devices, advice on what data to collect, guidance on how to evaluate medical information found online, or answers to concerns one might have about data sharing on patient platforms.

It’s not only patients who are facing these questions in the age of big data in medicine. Clinicians are also increasingly confronted with diverse forms of molecular, genetic, lifestyle, and digital data, and the quality, meaning, and actionability of this data are often unclear.

The difficulties of interpreting unstructured data, such as symptom logs recorded on personal devices, add another layer of complexity for clinicians trying to decide which course of action would best meet their duty of beneficence and enable the best possible care for patients.


Compulsory Genetic Testing for Refugees: No Thanks

By Gali Katznelson


Recent reports claim that Attorney General Jeff Sessions is considering using genetic testing to determine whether children who enter the country with adults actually share a genetic relationship with those adults.

The Daily Caller reported that Sessions suggested in a radio interview that the government might undertake genetic testing of refugees and migrants in an effort to prevent fraud and human trafficking.

This proposal is problematic not only because DNA testing is unreliable and vulnerable to hacking, but also because it is an invasion of privacy and flies in the face of guidelines from the UNHCR, the United Nations’ refugee agency.


Prescription Monitoring Programs: HIPAA, Cybersecurity and Privacy

By Stephen P. Wood

Privacy, especially as it relates to healthcare and protecting sensitive medical information, is an important issue. The Health Insurance Portability and Accountability Act, better known as HIPAA, is federal legislation that helps safeguard personal medical information. This protection is afforded to individuals by the Privacy Rule, which dictates who can access an individual’s medical records, and the Security Rule, which ensures that electronic medical records are protected.

Access to someone’s healthcare records by a medical provider typically requires a direct healthcare-related relationship with the patient in question. For example, if you have a regular doctor, that doctor can access your medical records. Similarly, if you call your doctor’s office off-hours, the covering doctor, who may have no prior relationship with you, may access these records. The same holds true if you go to the emergency department or see a specialist. No provider, however, should access protected information without a medical need.
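The access principle described above reduces to a simple authorization rule: a treatment-related relationship plus a medical need. A minimal sketch, with hypothetical relationship labels standing in for whatever a real EHR’s access-control layer actually uses:

```python
# Relationships that justify access under the principle above: the
# regular doctor, an off-hours covering doctor, an emergency or
# specialist encounter. Anything else is denied.
TREATMENT_RELATIONSHIPS = {"primary", "covering", "emergency", "specialist"}

def may_access_record(relationship: str, has_medical_need: bool) -> bool:
    """Allow access only for a treatment relationship plus a medical need."""
    return relationship in TREATMENT_RELATIONSHIPS and has_medical_need

# The covering doctor off-hours, with a reason to look: allowed.
assert may_access_record("covering", has_medical_need=True)
# No relationship, or a relationship without a medical need: denied.
assert not may_access_record("none", has_medical_need=True)
assert not may_access_record("covering", has_medical_need=False)
```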


DNA Donors Must Demand Stronger Privacy Protection

By Mason Marks and Tiffany Li

An earlier version of this article was published in STAT.

The National Institutes of Health wants your DNA, and the DNA of one million other Americans, for an ambitious project called All of Us. Its goal — to “uncover paths toward delivering precision medicine” — is a good one. But until it can safeguard participants’ sensitive genetic information, you should decline the invitation to join unless you fully understand and accept the risks.

DNA databases like All of Us could provide valuable medical breakthroughs such as identifying new disease risk factors and potential drug targets. But these benefits could come with a high price: increased risk to individuals’ genetic data privacy, something that current U.S. laws do not adequately protect.

Facebook Should ‘First Do No Harm’ When Collecting Health Data

By Mason Marks

Following the Cambridge Analytica scandal, it was reported that Facebook planned to partner with medical organizations to obtain health records on thousands of users. The plans were put on hold when news of the scandal broke. But Facebook doesn’t need medical records to derive health data from its users. It can use artificial intelligence tools, such as machine learning, to infer sensitive medical information from its users’ behavior. I call this process mining for emergent medical data (EMD), and companies use it to sort consumers into health-related categories and serve them targeted advertisements. I will explain how mining for EMD is analogous to the medical diagnosis performed by physicians, and why companies that engage in this activity may be practicing medicine without a license.

Last week, Facebook CEO Mark Zuckerberg testified before Congress about his company’s data collection practices. Many lawmakers who questioned him understood that Facebook collects consumer data and uses it to drive targeted ads. However, few members of Congress seemed to understand that the value of data often lies not in the information itself, but in the inferences that can be drawn from it. Numerous examples illustrate how health information is inferred from the behavior of social media users: last year Facebook announced its reliance on artificial intelligence to predict which users are at high risk for suicide; a leaked document revealed that Facebook identified teens feeling “anxious” and “hopeless”; and data scientists used Facebook messages and “likes” to predict whether users had substance use disorders. In 2016, researchers analyzed Instagram posts to predict whether users were depressed. In each of these examples, user data was analyzed to sort people into health-related categories.
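The mechanics behind these examples are simple, which is part of the concern. Below is a deliberately simplified sketch of the kind of inference involved: a model mapping page “likes” to a health-related category. The vocabulary, labels, and training data are fabricated for illustration; this shows the shape of EMD mining, not any company’s actual system:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Fabricated training data: each string is one user's page likes,
# each label a health-related category an advertiser might target.
likes = [
    "energy_drinks late_night_gaming fast_food",
    "meditation_app sleep_tracker herbal_tea",
    "bar_crawls hangover_cures energy_drinks",
    "running_club salad_recipes sleep_tracker",
]
labels = ["at_risk", "not_at_risk", "at_risk", "not_at_risk"]

vec = CountVectorizer()
X = vec.fit_transform(likes)
model = MultinomialNB().fit(X, labels)

# A new user is sorted into a category without ever sharing medical data.
new_user = vec.transform(["energy_drinks hangover_cures"])
print(model.predict(new_user))  # -> ['at_risk']
```

A physician who drew the same inference from the same behavior would be making a diagnostic judgment; a platform doing it at scale calls it ad targeting, which is the analogy the essay develops.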


The Opioid Crisis Requires Evidence-Based Solutions, Part I: How the President’s Commission on Combating Drug Addiction Misinterpreted Scientific Studies

By Mason Marks

The opioid crisis kills at least 91 Americans each day and has far-reaching social and economic consequences for us all. As lawmakers explore solutions to the problem, they should ensure that new regulations are based on scientific evidence and reason rather than emotion or political ideology. Though emotions should motivate the creation of policies and legislation, solutions to the opioid epidemic should be grounded in empirical observation rather than feelings of anger, fear, or disgust. Legislators must be unafraid to explore bold solutions to the crisis, and some measured risks should be taken. In this three-part series on evidence-backed solutions to the opioid crisis, I discuss proposals under consideration by the Trump Administration including recent recommendations of the President’s Commission on Combating Drug Addiction and the Opioid Crisis. Though the Commission made some justifiable proposals, it misinterpreted the conclusions of scientific studies and failed to consider evidence-based solutions used in other countries. This first part of the series focuses on the misinterpretation of scientific data.

Last year more than 64,000 Americans died of drug overdose, which is “now the leading cause of death” in people under 50. Opioids are responsible for most of these deaths. By comparison, the National Safety Council estimates about 40,000 Americans died in auto crashes last year, and the Centers for Disease Control reports that 38,000 people were killed by firearms. Unlike deaths due to cars and firearms, which have remained relatively stable over the past few years, opioid deaths have spiked abruptly. Between 2002 and 2015, U.S. opioid-related deaths nearly tripled (from about 12,000 deaths in 2002 to over 33,000 in 2015). Last year, synthetic opioids such as fentanyl contributed to over 20,000 deaths and accounted for the sharpest increase in opioid fatalities.

The CVS/Aetna Deal: The Promise in Data Integration

By Wendy Netter Epstein

Earlier this month, CVS announced plans to buy Aetna, one of the nation’s largest health insurers, in a $69 billion deal. Aetna and CVS pitched the deal to the public largely on the promise of controlling costs and improving efficiency in their operations, which they say will redound to the benefit of consumers. The media coverage since the announcement has largely focused on these claims, and in particular on the question of whether this vertical integration will ultimately lower health care costs for consumers or increase them. There are both skeptics and optimists. A lot will turn on the effects of integrating Aetna’s insurance with CVS’s pharmacy benefit manager services.

But CVS and Aetna also flag another potential benefit that has garnered less media attention: the promise in combining their data. CVS CEO Larry Merlo says that “[b]y integrating data across [their] enterprise assets and through the use of predictive analytics,” consumers (and patients) will be better off. This claim merits more attention. There are three key ways that Merlo might be right.
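One concrete version of the data-integration claim: joining pharmacy fill records to insurance claims would let the combined company spot, say, a lapsed prescription before it becomes a hospitalization. A hedged sketch of that join in Python, with invented records and field names that do not reflect either company’s actual data model:

```python
import pandas as pd

# Invented pharmacy-side data: prescription fills at the drugstore.
fills = pd.DataFrame({
    "member_id": [1, 1, 2],
    "drug": ["statin", "statin", "insulin"],
    "fill_date": pd.to_datetime(["2017-01-05", "2017-02-06", "2017-03-10"]),
})

# Invented insurer-side data: diagnoses from claims.
claims = pd.DataFrame({
    "member_id": [1, 2],
    "diagnosis": ["hyperlipidemia", "diabetes"],
})

# The integration step: one view of the member across both businesses.
merged = fills.merge(claims, on="member_id")

# A crude adherence signal: members whose last fill has lapsed.
last_fill = merged.groupby("member_id")["fill_date"].max()
as_of = pd.Timestamp("2017-04-01")
lapsed = last_fill[(as_of - last_fill) > pd.Timedelta(days=45)]
print(lapsed)  # member 1: flagged for outreach before costs escalate
```

Whether that kind of flagging ultimately lowers costs for consumers, rather than only for the combined firm, is precisely where the skeptics and optimists part ways.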