Image of a young woman sitting in her bedroom in workout clothes, checking a smartwatch health app

Do You Know the Terms and Conditions of Your Health Apps? HIPAA, Privacy and the Growth of Digital Health

As more health care is provided virtually through apps and web-based services, we need to take a closer look at whether users fully understand what they are consenting to with respect to their health information.

How health apps obtain consent needs re-evaluation. At the same time, digital health offers an important opportunity to strengthen privacy practices on digital platforms, and we ought to seize it. Read More

Handcuffs on a pile of pills

Emergency Department Psychiatric Holds: A Form of Medical Incarceration?

Wait times and length of stay in emergency departments (EDs) are a hot topic, and prolonged stays often result in identifiable harms, including medical error and failure to meet quality care measures. Patients with psychiatric conditions, including suicidal ideation, risk of harm to others, or psychosis, are particularly vulnerable to increased ED lengths of stay. ED holds for psychiatric patients can last three times as long as similar holds for medical patients. Lack of access to appropriate care, comorbid medical illness, and violent behavior can all contribute.

Increased length of stay affects the efficiency of the ED itself, driving up wait times and consuming staff and physical space. Its more important impact, however, is on the patient. Patients may be held in a small room under constant observation for days, with little or no access to natural light, bathing facilities, or contact with family or friends. They may be dressed in paper gowns, told when to eat and when to sleep, and confined to their rooms for days at a time, conditions that emulate a maximum-security prison. Emergency departments, through no fault of their own, are becoming holding cells for patients who are both vulnerable and often marginalized.

Read More

ONC’s Proposed Rule is a Breakthrough in Patient Empowerment

By Adrian Gropper

Imagine solving wicked problems of patient matching, consent, and a patient-centered longitudinal health record while also enabling a world of new healthcare services for patients and physicians to use. The long-awaited Notice of Proposed Rulemaking (NPRM) on information blocking from the Office of the National Coordinator for Health Information Technology (ONC) promises nothing less. 

Having data automatically follow the patient is a laudable goal but difficult for reasons of privacy, security, and institutional workflow. The privacy issues are clear if surveillance is the mechanism used to follow the patient. Do patients know they’re under surveillance? By whom? Is there one surveillance agency, or are there dozens in real-world practice? Can a patient choose who does the surveillance, and which health encounters, including behavioral health, social relationships, location, and finance, are excluded from it? Read More

Reality star Kim Kardashian at the CFDA Awards at the Brooklyn Museum on June 4, 2018.

Can Kim Kardashian Help Bioethics? Celebrity Data Breaches and Software for Moral Reflection

In 2013, Kim Kardashian entered Cedars-Sinai Medical Center in Los Angeles.

During her hospitalization, unauthorized hospital personnel accessed Kardashian’s medical record more than fourteen times. Secret “leaks” of celebrities’ medical information had, unfortunately, become de rigueur. Similar problems befell Prince, Farrah Fawcett, and perhaps most notably, Michael Jackson, whose death stoked a swelling media frenzy around his health. While these breaches may seem minor, patient privacy is ethically important, even for the likes of the Kardashians.

Since 2013, however, a strange thing has happened.

Across hospitals in the U.S. and beyond, snooping staff now encounter something curious. Through software, staff must “Break the Glass” (BTG) to access the records of patients outside their circle of care, and so physicians unassociated with Kim Kardashian’s care must BTG to access her files.

As part of the BTG process, users are prompted to provide a reason why they want to access a file. Read More
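The excerpt doesn’t spell out the mechanics, but the idea is simple enough to sketch. Below is a minimal, hypothetical Python illustration of a BTG check; the care-team roster, audit log, and function names are invented for illustration and do not reflect any particular EHR vendor’s implementation.

```python
from datetime import datetime, timezone

# Hypothetical in-memory stand-ins for an EHR's care-team roster and audit log.
care_teams = {"patient-001": {"dr_jones", "nurse_patel"}}
audit_log = []

def access_record(user, patient_id, reason=None):
    """Return a patient record, enforcing a break-the-glass (BTG) check.

    Users on the patient's care team get ordinary access. Anyone else must
    supply a documented reason, and the access is written to an audit log
    that privacy officers can review after the fact.
    """
    if user in care_teams.get(patient_id, set()):
        return f"record for {patient_id}"  # ordinary, in-team access

    if not reason:
        raise PermissionError(
            "Break the glass: a documented reason is required to open "
            "a record outside your circle of care."
        )

    # BTG access is permitted, but it leaves an attributable trail.
    audit_log.append({
        "user": user,
        "patient": patient_id,
        "reason": reason,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    return f"record for {patient_id}"

# A physician unassociated with the patient's care must break the glass:
access_record("dr_smith", "patient-001", reason="covering overnight shift")
```

The design point worth noticing is that BTG does not block access outright, since emergencies must remain possible; it converts silent snooping into a logged, attributable act.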

HIPAA is the Tip of the Iceberg When it Comes to Privacy and Your Medical Data

Big data continues to reshape health. For patient privacy, however, the exponential increase in the amount of data related to patient health raises major ethical and legal challenges.

In a new paper in Nature Medicine, “Privacy in the age of medical big data,” legal and bioethical experts W. Nicholson Price and I. Glenn Cohen examine the ways in which big data challenges the protection (and the way we conceive) of health care privacy. Read More

A row of colored medical records folders

The Troubling Prevalence of Medical Record Errors

Medical diagnoses and treatments carry plenty of potential concerns and complications on their own; errors in medical records add yet another opportunity for improper treatment.

A recent article from Kaiser Health News (KHN) discussed several examples of dangerous medical record errors: a hospital pathology report identifying cancer that never made it to the patient’s neurosurgeon; a patient whose record incorrectly identified her as having an underactive rather than overactive thyroid, potentially subjecting her to harmful medication; and a patient who discovered pages of someone else’s medical records tucked into her father’s records. Beyond incorrect information, omitting medications, allergies, and lab results from a patient’s record can be just as dangerous.

The goal of “one patient, one record” is to “bring patient records and data into one centralized location that all clinicians will be able to access as authorized.” This enables providers to better understand the full picture of a patient’s medical condition. It also reduces the number of questions a patient must answer about their conditions and history at each visit, and with them the chances of introducing errors.

Other benefits, such as cost savings and care coordination, also add to the appeal of centralized records.
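For readers who think in code, here is one way to picture “one patient, one record.” This is a toy sketch with invented names and a bare-bones authorization rule, not a description of any real health record system.

```python
from collections import defaultdict

# Hypothetical centralized store: one key per patient, everything filed under it.
records = defaultdict(list)      # patient_id -> list of record entries
authorized = defaultdict(set)    # patient_id -> clinician ids allowed access

def file_entry(patient_id, entry):
    """File a result under the patient's single record, not in a departmental silo."""
    records[patient_id].append(entry)

def read_record(clinician, patient_id):
    """Return the full record, but only to clinicians authorized for this patient."""
    if clinician not in authorized[patient_id]:
        raise PermissionError(f"{clinician} is not authorized for {patient_id}")
    return records[patient_id]

# With one record, the pathology report and the neurosurgeon's notes land in
# the same place, so a critical finding is less likely to go missing in transit.
authorized["patient-42"].update({"pathologist-1", "neurosurgeon-9"})
file_entry("patient-42", {"type": "pathology", "finding": "malignant"})
print(read_record("neurosurgeon-9", "patient-42"))
```

Because every clinician reads from and writes to the same record, the kind of lost-in-transit failure in the KHN pathology example becomes structurally harder to produce.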

Read More

Image of hands texting on a smartphone

Artificial Intelligence for Suicide Prediction

Suicide is a global problem, causing 800,000 deaths per year worldwide. In the United States, suicide rates rose by 25 percent over the past two decades, and suicide now kills 45,000 Americans each year, more than either auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI) based prediction tools. This essay analyzes the under-explored risks these systems pose to safety, privacy, and autonomy.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Though essentially unregulated, social suicide prediction uses behavioral data mined from consumers’ digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.
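None of these companies publish their models, so any code can only gesture at the general technique: a supervised classifier trained on behavioral signals. The toy Python sketch below uses invented features and synthetic labels purely to make the mechanics concrete; it is not how any real platform’s system works, and nothing like it should be used clinically.

```python
# Toy illustration of "social suicide prediction" as a supervised classifier
# over behavioral features. Features, data, and labels are all invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user features: [late-night posting rate,
# negative-sentiment score, change in messaging volume].
X = [
    [0.1, 0.2, 0.0],
    [0.7, 0.8, -0.5],
    [0.2, 0.3, 0.1],
    [0.9, 0.9, -0.7],
]
y = [0, 1, 0, 1]  # synthetic labels: 1 = flagged for follow-up in this toy setup

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[0.6, 0.7, -0.4]])[0][1]
print(f"predicted risk score: {risk:.2f}")
```

Even the toy version makes the regulatory gap visible: every input is ordinary consumer behavior, collected and scored entirely outside HIPAA, medical ethics, and human-subjects rules.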

Read More

Doctor wearing gloves, holding a sperm sample in a test tube while writing on a clipboard

When Fertility Doctors Use Their Own Sperm, and Families Don’t Find Out for Decades

An Idaho U.S. District Court ruled this week that parents can provisionally sue the fertility doctor who, in 1980, used his own sperm to conceive their daughter, so long as their claims aren’t barred by the decades that have passed since the alleged misconduct, which DNA tests substantiate. The daughter, now almost 40, discovered the fraud when she tested her ancestry with a mail-order DNA kit.

The facts are scandalous—but not unique. A handful of similar cases have recently come to light.

Read More

Doctor with stethoscope in hand touching a medical network connection icon on a modern virtual screen interface (medical technology network concept)

Data-driven Medicine Needs a New Profession: Health Information Counseling

By Barbara Prainsack, Alena Buyx, and Amelia Fiske

Have you ever clicked ‘I agree’ to share information about yourself through a health app on your smartphone? Wondered whether the results of a new therapy reported on a patient community website were accurate? Considered altering a medical device to better meet your own needs, but had doubts about how the changes might affect its function?

While these kinds of decisions are increasingly routine, there is no clear path for getting information on health-related devices, advice on what data to collect, guidance on how to evaluate medical information found online, or answers to concerns one might have about data sharing on patient platforms.

It’s not only patients who face these questions in the age of big data in medicine. Clinicians are also increasingly confronted with diverse forms of molecular, genetic, lifestyle, and digital data, and the quality, meaning, and actionability of these data are often unclear.

The difficulties of interpreting unstructured data, such as symptom logs recorded on personal devices, add another layer of complexity for clinicians trying to decide which course of action would best meet their duty of beneficence and enable the best possible care for patients.

Read More

Compulsory Genetic Testing for Refugees: No Thanks

By Gali Katznelson

Lab worker testing DNA
DNA tests are not perfect, and they can be vulnerable to manipulation. The UNHCR says genetic testing is an invasion of privacy. (Photo by andjic/Thinkstock)

Recent reports claim that Attorney General Jeff Sessions is considering using genetic testing to determine whether children who enter the country with adults actually share a genetic relationship with them.

The Daily Caller reported that Sessions suggested in a radio interview that the government might undertake genetic testing of refugees and migrants in an effort to prevent fraud and human trafficking.

This proposal is problematic not only because DNA testing is unreliable and vulnerable to hacking, but also because it invades privacy and flies in the face of guidelines from the United Nations’ refugee agency.

Read More