Do You Own Your Genetic Test Results? What About Your Temperature?

By Jorge L. Contreras

The popular direct-to-consumer genetic testing site AncestryDNA claims that “You always maintain ownership of your data.” But is this true? And if so, what does it mean?

For more than a century, US law has held that data – objective information and facts – cannot be owned as property. Nevertheless, in recent years there have been increasing calls to recognize property interests in individual health information. Inspired by high-profile data breaches and skullduggery by Facebook and others, as well as ever more frequent stories of academic research misconduct and pharmaceutical industry profiteering, many bioethicists and patient advocates, seeking to bolster personal privacy and autonomy, have argued that property rights should be recognized in health data. In addition, a new crop of would-be data intermediaries (e.g., Nebula Genomics, Genos, Invitae, LunaDNA and Hu.manity.org) has made further calls to propertize health data, presumably to profit from acting as the go-betweens in what has been estimated to be a $60-$100 billion global market in health data.

Nobody Reads the Terms and Conditions: A Digital Advanced Directive Might Be Our Solution

Could Facebook know your menstruation cycle?

In a recent op-ed, “You Just Clicked Yes. But, Do you Know Terms and Conditions of that Health App?,” I argued that a mix of factors has given rise to the need to regulate web-based health services and apps. Because most of these applications do not fall under the Health Insurance Portability and Accountability Act (HIPAA), because few people actually read through the Terms and Conditions, and because web-based health applications are growing explosively, the need for solutions is dire.

What We Lost When We Lost Google ATEAC

By Joanna Bryson

In a few weeks, the Advanced Technology External Advisory Council (ATEAC) was scheduled to come together for its first meeting. At that meeting, we were expected to “stress test” a proposed face recognition technology policy. “We were going to dedicate an entire day to it” (at least a quarter of the time they expected to get out of us). The people I talked to at Google seemed profoundly disturbed by what “face recognition” could do. It’s not the first time I’ve heard that kind of deep concern – I’ve also heard it in completely unrelated one-on-one settings from a very diverse set of academics whose only commonality was working at the interface of machine learning and human-computer interaction (HCI). It isn’t just face recognition. It’s body posture, acoustics of speech and laughter, the way a pen is used on a tablet, and (famously) text. Privacy isn’t over, but it will never again be present in society without serious, deliberate, coordinated defense.

What Should Happen to our Medical Records When We Die?

By Jon Cornwall

In the next 200 years, at least 20 billion people will die. A good proportion of these people are going to have electronic medical records, and that raises the question: what are we going to do with all this posthumous medical data? Despite the seemingly logical and inevitable application of medical data from deceased persons for research and healthcare both now and in the future, the issue of how best to manage posthumous medical records is currently unclear.

Presently, large medical data sets do exist and have their own uses, though largely these are data sets containing ‘anonymous’ data. In the future, if medicine is to deliver on the promise of truly ‘personalized’ medicine, then electronic medical records will potentially have increasing value and relevance for generations of our descendants. This will, however, require the public to consider how much privacy and anonymity they are willing to part with in regard to information arising from their medical records. After all, enabling our medical records with the power to influence personalized medicine for our descendants cannot happen without knowing who we, or our descendants, actually are.

A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance

By David Arney, Max Senges, Sara Gerke, Cansu Canca, Laura Haaber Ihle, Nathan Kaiser, Sujay Kakarmath, Annabel Kupke, Ashveena Gajeele, Stephen Lynch, Luis Melendez

A new working paper from participants in the AI-Health Working Group out of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University sets forth a research agenda for stakeholders (researchers, practitioners, entrepreneurs, policy makers, etc.) to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.

Along with sections on Technology and a Healthy Good Life as well as Data, the authors focus a section on Nudging, a concept that “alters people’s behavior in a predictable way without forbidding any options,” and tie nudging into AI technology in the healthcare context.


Telemedicine Adds a Wrinkle to Latest New Mexico Legislative Debate on Aid in Dying

Then-Senate Majority Leader Bill Frist was roundly criticized in 2005 for declaring that Terri Schiavo, a Florida woman who had gone into cardiac arrest at age 26, was “not somebody in persistent vegetative state” after viewing videotapes of her condition. The tragic situation is mostly remembered as a low point for federalism and end-of-life policy.

But there is another issue stemming from the debate that ought to be considered. Although Frist backed away from calling his review of videos an actual diagnosis, it is worth considering how the use of technology to make a remote determination of a patient’s condition has changed since Frist made his assessment.

Indeed, over a decade later, a New Mexico bill proposes the opposite: allowing individuals with a terminal illness to use telemedicine consultations to seek aid in ending their lives. It is not surprising that New Mexico lawmakers would consider telemedicine as part of their proposal. Given its geography, the state has embraced telemedicine as a means of expanding access, and innovative workforce initiatives such as Project ECHO were born there.



The Troubling Prevalence of Medical Record Errors

With plenty of potential healthcare concerns and complications arising out of medical diagnoses and treatments themselves, errors in medical records present an unfortunate additional opportunity for improper treatment.

A recent article from Kaiser Health News (KHN) discussed several examples of dangerous medical record errors: a hospital pathology report identifying cancer that failed to reach the patient’s neurosurgeon; a patient whose record incorrectly identified her as having an underactive rather than overactive thyroid, potentially subjecting her to harmful medication; and a patient who discovered pages of someone else’s medical records tucked into her father’s file. Beyond incorrect information, omitting medications, allergies, or lab results from a patient’s record can be equally dangerous.

The goal of “one patient, one record” provides a way to “bring patient records and data into one centralized location that all clinicians will be able to access as authorized.” This enables providers to better understand the full picture of a patient’s medical condition. It also reduces the number of questions a patient must answer about their medical conditions and history at each visit, and with it the chances of errors creeping into those answers.

Other benefits, such as cost and care coordination, also add to the appeal of centralized records.



Artificial Intelligence for Suicide Prediction

Suicide is a global problem that causes 800,000 deaths per year worldwide. In the United States, suicide rates rose by 25 percent in the past two decades, and suicide now kills 45,000 Americans each year, more than die in auto accidents or homicides.

Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI) based prediction tools. This essay analyzes the risks these systems pose to safety, privacy, and autonomy, which have been under-explored.

Two parallel tracks of AI-based suicide prediction have emerged.

The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.

My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Essentially unregulated, social suicide prediction uses behavioral data mined from consumers’ digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.



The Millennial Need for Speed in Healthcare

According to a recent Kaiser Family Foundation (KFF) poll, a shockingly large share of Americans report that they don’t have a primary care provider.

The July 2018 report found that 45 percent of 18-to-29-year-olds lack a designated primary care provider, as do 28 percent of 30-to-49-year-olds and 18 percent of 50-to-64-year-olds.

Kaiser Health News (KHN) explained that the price transparency, convenience, and speed of alternatives to office-based primary care physician (PCP) visits appear to be some of the preferences driving these patterns. Retail clinics, urgent care centers, and telemedicine websites satisfy many of these preferences, and are therefore appealing alternatives to scheduled appointments with a PCP. For example, extended hours and shorter wait times at increasingly widespread retail clinics have attracted young patients who want to avoid the hassle of scheduling and attending a traditional doctor’s office visit.

A 2015 PNC Healthcare survey similarly found that millennials were significantly less likely to see their PCP (61 percent) than baby boomers (80 percent) and seniors (85 percent). The study emphasized the effects of technology on millennials’ healthcare habits, such as heavier use of online reviews (for example, Yelp) to shop for doctors. It also found that millennials are much more likely to prefer retail and acute care clinics, and are more likely than older generations to postpone treatment due to high costs.



Four Roles for Artificial Intelligence in the Medical System

How will artificial intelligence (AI) change medicine?

AI, powered by “big data” in health, promises to transform medical practice, but the specifics remain inchoate. Reports that AI performs certain tasks at the level of specialists stoke worries that AI will “replace” physicians. These worries are probably overblown; AI is unlikely to replace many physicians in the foreseeable future. A more productive set of questions considers how AI and physicians should interact: how AI can improve the care physicians deliver, how AI can best enable physicians to focus on the patient relationship, and how physicians should review the recommendations and predictions of AI. Answering those questions requires clarity about the larger function of AI: not just what tasks AI can do or how it can do them, but what role it will play alongside physicians, patients, and providers within the overall medical system.

Medical AI can improve care for patients and improve the practice of medicine for providers—as long as its development is supported by an understanding of what role it can and should play.

Four different roles each have the possibility to be transformative for providers and patients: AI can push the frontiers of medicine; it can replicate and democratize medical expertise; it can automate medical drudgery; and it can allocate medical resources.
