Can Kim Kardashian Help Bioethics? Celebrity Data Breaches and Software for Moral Reflection

In 2013, Kim Kardashian entered Cedars-Sinai Medical Center in Los Angeles.

During her hospitalization, unauthorized hospital personnel accessed Kardashian’s medical record more than fourteen times. Secret “leaks” of celebrities’ medical information had, unfortunately, become de rigueur. Similar problems befell Prince, Farrah Fawcett, and, perhaps most notably, Michael Jackson, whose death stoked a swelling media frenzy around his health. While these breaches may seem minor, patient privacy is ethically important, even for the likes of the Kardashians.

Since 2013, however, a strange thing has happened.

At hospitals in the U.S. and beyond, snooping staff now encounter something curious. Through software, staff must now “Break the Glass” (BTG) to access the records of patients outside their circle of care, and so physicians unassociated with Kim Kardashian’s care must BTG to access her files.

As part of the BTG process, users are prompted to provide a reason why they want to access a file.
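The underlying logic is simple enough to sketch. Below is a minimal, hypothetical illustration in Python of how a BTG check might work; the function and field names are invented for this post and do not correspond to any EHR vendor’s actual API.

```python
# Minimal, hypothetical sketch of a "Break the Glass" (BTG) check.
# All names are invented for illustration; this is not any EHR vendor's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, user_id: str, patient_id: str, reason: str) -> None:
        # Every BTG override is logged with a timestamp for later review.
        self.entries.append((datetime.now(timezone.utc), user_id, patient_id, reason))

def open_record(user_id: str, patient_id: str, care_team: set,
                log: AuditLog, btg_reason: str = "") -> bool:
    """Allow routine access for the care team; require a stated reason otherwise."""
    if user_id in care_team:
        return True      # within the circle of care: no prompt
    if not btg_reason:
        return False     # outside the circle: the software prompts the user to BTG
    log.record(user_id, patient_id, btg_reason)  # audited override
    return True

log = AuditLog()
open_record("dr_lee", "patient_42", {"dr_kim"}, log)                      # False: prompt shown
open_record("dr_lee", "patient_42", {"dr_kim"}, log, "covering on call")  # True, and logged
```

The point of the design is not to block access outright, since emergencies demand flexibility, but to force a moment of reflection and leave an audit trail.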

HIPAA is the Tip of the Iceberg When It Comes to Privacy and Your Medical Data

Big data continues to reshape health care. The exponential increase in the amount of data related to patient health, however, raises major ethical and legal challenges for patient privacy.

In a new paper in Nature Medicine, “Privacy in the age of medical big data,” legal and bioethical experts W. Nicholson Price and I. Glenn Cohen examine the ways in which big data challenges the protection (and the way we conceive) of health care privacy.

The Troubling Prevalence of Medical Record Errors

Medical diagnoses and treatments carry plenty of potential for complications on their own; errors in medical records present an unfortunate additional opportunity for improper treatment.

A recent article from Kaiser Health News (KHN) discussed several examples of dangerous medical record errors: a hospital pathology report identifying cancer that failed to reach the patient’s neurosurgeon; a patient whose record incorrectly identified her as having an underactive rather than overactive thyroid, potentially subjecting her to harmful medicine; and a patient who discovered pages of someone else’s medical records tucked into her father’s records. In addition to incorrect information, omitting information on medications, allergies, and lab results from a patient’s records can be quite dangerous.

The goal of “one patient, one record” provides a way to “bring patient records and data into one centralized location that all clinicians will be able to access as authorized.” This enables providers to better understand the full picture of a patient’s medical condition. It also minimizes the number of questions a patient must answer about their medical conditions and history at each visit, and with it the chances of error.

Other benefits, such as cost and care coordination, also add to the appeal of centralized records.

Artificial Intelligence for Suicide Prediction

Suicide is a global problem that causes 800,000 deaths per year. In the United States, suicide rates rose by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI) based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Social suicide prediction uses behavioral data mined from consumers’ digital interactions, and it is essentially unregulated: the companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.
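To make the mechanics concrete, here is a toy sketch in Python of how a behavioral risk model of this kind might be structured. The features, data, and labels are all invented for illustration; nothing here reflects the actual systems used by any platform or by the VA.

```python
# Toy illustration of a "social suicide prediction" risk model.
# Features, data, and labels are invented; this reflects no real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-user behavioral features:
# [late-night posts per week, negative-sentiment score, crisis-page visits]
X = rng.random((200, 3))

# Synthetic labels standing in for the historical outcomes a platform might use.
y = (X @ np.array([0.5, 1.5, 2.0]) + rng.normal(0, 0.3, 200) > 2.0).astype(int)

model = LogisticRegression().fit(X, y)

# In deployment, the system would score users and flag those above a threshold,
# which is precisely the step that raises safety, privacy, and autonomy concerns.
print(model.predict_proba(X[:5])[:, 1])
```

Even this toy version makes the stakes visible: the “features” are ordinary online behavior, collected and scored without anything resembling informed consent.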

When Fertility Doctors Use Their Own Sperm, and Families Don’t Find Out for Decades

An Idaho U.S. District Court ruled this week that parents can provisionally sue the fertility doctor who, in 1980, used his own sperm to create their daughter, so long as their claims are not barred by the many years that have passed since the alleged misconduct, which DNA tests substantiate. The daughter, now almost 40, discovered the fraud when she tested her ancestry with a mail-order DNA kit.

The facts are scandalous—but not unique. A handful of similar cases have recently come to light.

Data-driven Medicine Needs a New Profession: Health Information Counseling

By Barbara Prainsack, Alena Buyx, and Amelia Fiske

Have you ever clicked ‘I agree’ to share information about yourself on a health app on your smartphone? Wondered whether the results of a new therapy reported on a patient community website were accurate? Considered altering a medical device to better meet your own needs, but had doubts about how the changes might affect its function?

While these kinds of decisions are increasingly routine, there is no clear path for getting information on health-related devices, advice on what data to collect, guidance on how to evaluate medical information found online, or help with concerns one might have about data sharing on patient platforms.

It’s not only patients who are facing these questions in the age of big data in medicine. Clinicians are also increasingly confronted with diverse forms of molecular, genetic, lifestyle, and digital data, and the quality, meaning, and actionability of these data are often unclear.

The difficulties of interpreting unstructured data, such as symptom logs recorded on personal devices, add another layer of complexity for clinicians trying to decide which course of action would best meet their duty of beneficence and enable the best possible care for patients.

Compulsory Genetic Testing for Refugees: No Thanks

By Gali Katznelson

Recent reports claim that Attorney General Jeff Sessions is considering using genetic testing to determine whether children who enter the country with adults are genetically related to them.

The website The Daily Caller reported that Sessions suggested in a radio interview that the government might undertake genetic testing of refugees and migrants in an effort to prevent fraud and human trafficking.

This proposal is problematic not only because DNA testing is unreliable and vulnerable to hacking, but also because it is an invasion of privacy and flies in the face of guidelines from the United Nations’ refugee agency.

Prescription Monitoring Programs: HIPAA, Cybersecurity and Privacy

By Stephen P. Wood

Privacy, especially as it relates to healthcare and protecting sensitive medical information, is an important issue. The Health Insurance Portability and Accountability Act, better known as HIPAA, is federal legislation that helps safeguard personal medical information. This protection is afforded to individuals by the Privacy Rule, which dictates who can access an individual’s medical records, and the Security Rule, which ensures that electronic medical records are protected.

Access to someone’s healthcare records by a medical provider typically requires a direct health care-related relationship with the patient in question. For example, if you have a regular doctor, that doctor can access your medical records. Similarly, if you call your doctor’s office off-hours, the covering doctor, who may have no prior relationship with you, may access these records. The same holds true if you go to the emergency department or see a specialist. No provider, however, should access protected information without a medical need.

DNA Donors Must Demand Stronger Privacy Protection

By Mason Marks and Tiffany Li

An earlier version of this article was published in STAT.

The National Institutes of Health wants your DNA, and the DNA of one million other Americans, for an ambitious project called All of Us. Its goal — to “uncover paths toward delivering precision medicine” — is a good one. But until it can safeguard participants’ sensitive genetic information, you should decline the invitation to join unless you fully understand and accept the risks.

DNA databases like All of Us could provide valuable medical breakthroughs such as identifying new disease risk factors and potential drug targets. But these benefits could come with a high price: increased risk to individuals’ genetic data privacy, something that current U.S. laws do not adequately protect.

Facebook Should ‘First Do No Harm’ When Collecting Health Data

By Mason Marks

Following the Cambridge Analytica scandal, it was reported that Facebook planned to partner with medical organizations to obtain health records on thousands of users. The plans were put on hold when news of the scandal broke. But Facebook doesn’t need medical records to derive health data from its users. It can use artificial intelligence tools, such as machine learning, to infer sensitive medical information from its users’ behavior. I call this process mining for emergent medical data (EMD), and companies use it to sort consumers into health-related categories and serve them targeted advertisements. I will explain how mining for EMD is analogous to the process of medical diagnosis performed by physicians, and how companies that engage in this activity may be practicing medicine without a license.

Last week, Facebook CEO Mark Zuckerberg testified before Congress about his company’s data collection practices. Many lawmakers who questioned him understood that Facebook collects consumer data and uses it to drive targeted ads. However, few members of Congress seemed to understand that the value of data often lies not in the information itself, but in the inferences that can be drawn from it. There are numerous examples that illustrate how health information is inferred from the behavior of social media users: last year Facebook announced its reliance on artificial intelligence to predict which users are at high risk for suicide; a leaked document revealed that Facebook identified teens feeling “anxious” and “hopeless”; and data scientists used Facebook messages and “likes” to predict whether users had substance use disorders. In 2016, researchers analyzed Instagram posts to predict whether users were depressed. In each of these examples, user data was analyzed to sort people into health-related categories.
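To see how inference, rather than the raw data itself, carries the value, consider a toy sketch in Python: a classifier trained on innocuous “likes” that sorts users into a hypothetical health-related category. Everything below, from the page names to the labels, is invented for illustration and is not any platform’s actual pipeline.

```python
# Toy sketch of mining for emergent medical data (EMD): inferring a
# health-related category from non-medical behavior. All details are invented.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

PAGES = ["sleep-aid reviews", "late-night gaming", "support-group memes",
         "coffee brands", "running clubs"]

rng = np.random.default_rng(1)
likes = rng.integers(0, 2, size=(300, len(PAGES)))   # binary "like" matrix

# Hypothetical training labels, e.g., users who self-reported insomnia in a
# survey; in practice such labels might come from far murkier proxies.
insomnia = ((likes[:, 0] | likes[:, 2]) & rng.integers(0, 2, size=300)).astype(int)

clf = BernoulliNB().fit(likes, insomnia)

# The inference step: a new user has never shared medical information, yet
# their "likes" alone yield a health-category score usable for ad targeting.
new_user = np.array([[1, 1, 1, 0, 0]])
print(clf.predict_proba(new_user)[:, 1])
```

No medical record ever enters the pipeline; the sensitive category emerges entirely from behavioral data, which is what places EMD mining outside HIPAA’s reach.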
