In a few weeks, the Advanced Technology External Advisory Council (ATEAC) was scheduled to come together for its first meeting. At that meeting, we were expected to “stress test” a proposed face recognition technology policy. We were going to dedicate an entire day to it (at least a quarter of the time they expected to get out of us). The people I talked to at Google seemed profoundly disturbed by what “face recognition” could do. It’s not the first time I’ve heard that kind of deep concern – I’ve also heard it in completely unrelated one-on-one settings from a very diverse set of academics whose only commonality was working at the interface of machine learning and human-computer interaction (HCI). It isn’t just face recognition. It’s body posture, acoustics of speech and laughter, the way a pen is used on a tablet, and (famously) text. Privacy isn’t over, but it will never again be present in society without serious, deliberate, coordinated defense.
In the next 200 years, at least 20 billion people will die. A good proportion of these people are going to have electronic medical records, which raises the question: what are we going to do with all this posthumous medical data? Although using medical data from deceased persons for research and healthcare, both now and in the future, seems logical and even inevitable, how best to manage posthumous medical records remains unresolved.
Presently, large medical data sets do exist and have their own uses, though these are largely data sets containing ‘anonymous’ data. In the future, if medicine is to deliver on the promise of truly ‘personalized’ medicine, then electronic medical records will potentially have increasing value and relevance for generations of our descendants. This will, however, require the public to consider how much privacy and anonymity they are willing to part with in regard to information arising from their medical records. After all, our medical records cannot be given the power to influence personalized medicine for our descendants without knowing who we, or our descendants, actually are.
By David Arney, Max Senges, Sara Gerke, Cansu Canca, Laura Haaber Ihle, Nathan Kaiser, Sujay Kakarmath, Annabel Kupke, Ashveena Gajeele, Stephen Lynch, Luis Melendez
A new working paper from participants in the AI-Health Working Group out of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University sets forth a research agenda for stakeholders (researchers, practitioners, entrepreneurs, policy makers, etc.) to proactively collaborate and design AI technologies that work with users to improve their health and wellbeing.
Along with sections on Technology and a Healthy Good Life as well as Data, the authors focus a section on Nudging, a concept that “alters people’s behavior in a predictable way without forbidding any options,” and tie nudging into AI technology in the healthcare context.
Then-Senate Majority Leader Bill Frist was roundly criticized in 2005 for declaring that Terri Schiavo, a Florida woman who had gone into cardiac arrest at age 26, was “not somebody in persistent vegetative state” after viewing videotapes of her condition. The tragic situation is mostly remembered as a low point for federalism and end-of-life policy.
But there is another issue stemming from the debate that ought to be considered. Although Frist backed away from calling his review of videos an actual diagnosis, it is interesting to think how the use of technology to make a remote determination of a patient’s condition has changed since Frist made his assessment.
Indeed, over a decade later, a New Mexico bill is proposing the opposite: allowing individuals with a terminal illness to utilize telemedicine consultations to seek aid to end their lives. It is not surprising that New Mexico lawmakers would consider telemedicine as part of their proposal. Given its geography, the state has embraced telemedicine as a means of expanding access, and innovative workforce initiatives such as Project ECHO were birthed there.
With plenty of potential healthcare concerns and complications arising out of medical diagnoses and treatments themselves, errors in medical records present an unfortunate additional opportunity for improper treatment.
A recent article from Kaiser Health News (KHN) discussed several examples of dangerous medical record errors: a hospital pathology report identifying cancer that failed to reach the patient’s neurosurgeon, a patient whose record incorrectly identified her as having an under-active rather than overactive thyroid, potentially subjecting her to harmful medicine, and a patient who discovered pages of someone else’s medical records tucked into her father’s records. In addition to incorrect information, omitting information on medications, allergies, and lab results from a patient’s records can be quite dangerous.
The goal of “one patient, one record” provides a way to “bring patient records and data into one centralized location that all clinicians will be able to access as authorized.” This enables providers to better understand the full picture of a patient’s medical condition. It also minimizes the number of questions, and chances of making errors, that a patient must answer regarding their medical conditions and history when they visit a provider.
Other benefits, such as cost and care coordination, also add to the appeal of centralized records.
Suicide is a global problem that causes 800,000 deaths per year. In the United States, suicide rates rose by 25 percent in the past two decades, and suicide now kills 45,000 Americans each year, which is more than auto accidents or homicides.
Traditional methods of predicting suicide, such as questionnaires administered by doctors, are notoriously inaccurate. Hoping to save lives by predicting suicide more accurately, hospitals, governments, and internet companies are developing artificial intelligence (AI) based prediction tools. This essay analyzes the risks these systems pose to safety, privacy, and autonomy, which have been under-explored.
Two parallel tracks of AI-based suicide prediction have emerged.
The first, which I call “medical suicide prediction,” uses AI to analyze patient records. Medical suicide prediction is not yet widely used, aside from one program at the Department of Veterans Affairs (VA). Because medical suicide prediction occurs within the healthcare context, it is subject to federal laws, such as HIPAA, which protects the privacy and security of patient information, and the Federal Common Rule, which protects human research subjects.
My focus here is on the second track of AI-based suicide prediction, which I call “social suicide prediction.” Though essentially unregulated, social suicide prediction uses behavioral data mined from consumers’ digital interactions. The companies involved, which include large internet platforms such as Facebook and Twitter, are not generally subject to HIPAA’s privacy regulations, principles of medical ethics, or rules governing research on human subjects.
According to a recent Kaiser Family Foundation (KFF) poll, a shockingly large share of Americans report that they don’t have a primary care provider.
The July 2018 report found that 45 percent of 18-29 year olds lack a designated primary care provider, as do 28 percent of 30-49 year olds and 18 percent of 50-64 year olds.
Kaiser Health News (KHN) explained that the price transparency, convenience, and speed of alternatives to office-based primary care physician (PCP) visits appear to be some of the preferences driving these patterns. Retail clinics, urgent care centers, and telemedicine websites satisfy many of these preferences, and are therefore appealing alternatives to scheduled appointments with a PCP. For example, extended hours and shorter wait times at increasingly widespread retail clinics have attracted young patients who want to avoid the hassle involved in scheduling and attending a visit at a traditional doctor’s office.
A 2015 PNC Healthcare survey similarly found that millennials saw their PCP significantly less often (61 percent) than baby boomers and seniors (80 and 85 percent, respectively). The study emphasized the effects of technology on millennials’ trends in healthcare acquisition, such as higher utilization of online reviews (such as Yelp) to shop for doctors. It also found that millennials are much more likely to prefer retail and acute care clinics, and are more likely than older generations to postpone treatment due to high costs.
How will artificial intelligence (AI) change medicine?
AI, powered by “big data” in health, promises to transform medical practice, but specifics remain inchoate. Reports that AI performs certain tasks at the level of specialists stoke worries that AI will “replace” physicians. These worries are probably overblown; AI is unlikely to replace many physicians in the foreseeable future. A more productive set of questions considers how AI and physicians should interact, including how AI can improve the care physicians deliver, how AI can best enable physicians to focus on the patient relationship, and how physicians should review the recommendations and predictions of AI. Answering those questions requires clarity about the larger function of AI: not just what tasks AI can do or how it can do them, but what role it will play in the context of physicians, other patients, and providers within the overall medical system.
Medical AI can improve care for patients and improve the practice of medicine for providers—as long as its development is supported by an understanding of what role it can and should play.
Four different roles each have the possibility to be transformative for providers and patients: AI can push the frontiers of medicine; it can replicate and democratize medical expertise; it can automate medical drudgery; and it can allocate medical resources.
Have you ever clicked ‘I agree’ to share information about yourself on a health app on your smartphone? Wondered if the results of new therapy reported on a patient community website were accurate? Considered altering a medical device to better meet your own needs, but had doubts about how the changes might affect its function?
While these kinds of decisions are increasingly routine, there is no clear path for getting information on health-related devices, advice on what data to collect, guidance on how to evaluate medical information found online, or answers to concerns one might have around data sharing on patient platforms.
It’s not only patients who are facing these questions in the age of big data in medicine. Clinicians are also increasingly confronted with diverse forms of molecular, genetic, lifestyle, and digital data, and often the quality, meaning, and actionability of this data is unclear.
The difficulties of interpreting unstructured data, such as symptom logs recorded on personal devices, add another layer of complexity for clinicians trying to decide which course of action would best meet their duty of beneficence and enable the best possible care for patients.
Privacy, especially as it relates to healthcare and protecting sensitive medical information, is an important issue. The Health Insurance Portability and Accountability Act, better known as HIPAA, is federal legislation that helps to safeguard personal medical information. This protection is afforded to individuals by the Privacy Rule, which dictates who can access an individual’s medical records, and the Security Rule, which ensures that electronic medical records are protected.
Access to someone’s healthcare records by a medical provider typically requires a direct health care-related relationship with the patient in question. For example, if you have a regular doctor, that doctor can access your medical records. Similarly, if you call your doctor’s office off-hours, the covering doctor, who may have no prior relationship with you, may similarly access these records. The same holds true if you go to the emergency department or see a specialist. No provider, however, should access protected information without a medical need.