[Image: Reality star Kim Kardashian at the CFDA Awards at the Brooklyn Museum on June 4, 2018.]

Can Kim Kardashian Help Bioethics? Celebrity Data Breaches and Software for Moral Reflection

In 2013, Kim Kardashian entered Cedars-Sinai Medical Center in Los Angeles.

During her hospitalization, unauthorized hospital personnel accessed Kardashian’s medical record more than fourteen times. Secret “leaks” of celebrities’ medical information had, unfortunately, become routine. Similar problems befell Prince, Farrah Fawcett, and perhaps most notably, Michael Jackson, whose death stoked a swelling media frenzy around his health. While these breaches may seem minor, patient privacy is ethically important, even for the likes of the Kardashians.

Since 2013, however, a strange thing has happened.

Across hospitals in the U.S. and beyond, snooping staff now encounter something curious. Through software, staff must now “Break the Glass” (BTG) to access the records of patients outside their circle of care, and so physicians unassociated with Kim Kardashian’s care must BTG to access her files.

As part of the BTG process, users are prompted to provide a reason why they want to access a file.

Alongside the growth of electronic health records, specialized add-on software capabilities, such as BTG, have also emerged. Some systems even ask users “moral” questions, which I detail in the following sections. Yet the importance of these systems extends far beyond the problem of patient privacy.

I believe that using tools to compel on-the-spot moral reflection shows the powerful moral potential of systems in bioethics.

 

Can Software Trigger Moral Thinking?

Given its use for health data, it is natural to ask whether a tool such as BTG software is effective at protecting patients’ privacy, trust, and overall wellbeing. Because the software is so new, research on its impact on patient protection is only now emerging. While privacy and HIPAA are important ethical concerns, there is a second moral aspect of this software for bioethics: how systems can be designed to compel, if only momentarily, instances of moral reflection. In other domains, systems do this all the time. Lane departure warning systems in cars, for example, alert drivers in ways that compel them to pay attention, slow down, take stock of their surroundings, and so on.

In some versions of BTG software, users requesting record access encounter a set of strange questions. In one case, for example, the user is asked, “Are you sure you want to view another employee of the hospitals[sic] medical records?”

The posing of this question is compelling for several reasons.

For one, it induces users to take a moment to reflect. Am I accessing the file for the right reasons? Is there another way to accomplish what I need? What alternatives exist? In most BTG systems, users are still able to access patient data once the glass is “broken,” but they are required to enter a reason, as well as information about their role in the health organization.
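To make that workflow concrete, here is a minimal sketch of a BTG gate in Python. Everything in it is hypothetical: the function names, fields, and audit structure are my own illustrations, not any vendor’s actual API.

```python
from datetime import datetime, timezone

# Hypothetical sketch of a Break-the-Glass (BTG) gate. All names and
# fields are invented for illustration; real EHR systems implement
# this very differently.

AUDIT_LOG = []  # in a real system: a tamper-evident audit store


def in_circle_of_care(user_id: str, patient_id: str) -> bool:
    """Stub: a real system would consult care-team assignments."""
    return False  # assume the requester is NOT on the care team


def open_record(patient_id: str) -> str:
    """Placeholder for actual record retrieval."""
    return f"<record {patient_id}>"


def break_the_glass(user_id: str, role: str, patient_id: str, reason: str) -> str:
    """Grant access only after a reason and role are recorded."""
    if in_circle_of_care(user_id, patient_id):
        return open_record(patient_id)  # ordinary access, no BTG needed

    # The moral-reflection moment: access pauses until the user
    # articulates a justification in their own words.
    if not reason.strip():
        raise PermissionError("A reason is required to break the glass.")

    AUDIT_LOG.append({
        "user": user_id,
        "role": role,
        "patient": patient_id,
        "reason": reason,
        "time": datetime.now(timezone.utc).isoformat(),
    })
    return open_record(patient_id)  # access granted, but now auditable
```

Note that the gate does not block access; it interrupts it, demanding a justification that becomes part of an auditable trail.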

This prompt, by compelling the user to pause, shows how a system can be designed to draw the user, potentially, into a deeper level of thought.

Practically, taking a moment to consider our actions may make the difference between one action and another; between a bad outcome and a better one. In truth, it often does.

But the ability to craft questions designed to produce certain modes of moral thinking is powerful. Imagine if digital job application systems, contexts in which ethnic- or female-sounding names are at a distinct disadvantage, strategically reminded HR recruiters of the inadvertent bias such names can trigger. The implications are far-reaching.

For scholars, the reasons to value moral reflection are plentiful. For philosophy, moral reflection fuels the engine of our virtues. For moral psychology, reflective capacity is crucial to one’s agency, one’s very capacity to act in the world. Given this, moments of moral thinking may be a powerful stage for ethical intervention.

Of course, I am not suggesting that software acts in some magical way.

BTG must still be part of a set of guidelines, subject to review, enforcement, and audit. Clinicians and staff must still be educated on the ethical importance of patient privacy and trust. Technology often fails, and some imperfect, biased person must program it. And yes, much of clinical ethics becomes a mundane “check-box” of to-dos.

Still, there is something to the ability to redirect the power of digital systems, now omnipresent throughout global healthcare, towards focused moral aims. What if BTG software did more? Could software be used to intervene strategically during other critical moral moments?

 

Systems and the Future of Bioethics

To suggest that there is a role for software in ethics is not at all to say that it is a fix. Software cannot “fix” people. Rather, what I propose is far, far more nuanced. Given that be-a-good-person modes of moral intervention (training people in ethics; sanctioning bad actors) have proven limited in their effectiveness, is there an obligation to also create systems that help to support or sustain ethical aims, even in small ways? Here are some points to consider around BTG, moral reflection, systems, and bioethics.

 

What if Such Software Went Further – Some protocols around patient health information allow patients to digitally customize which aspects of their record are accessible and by whom. Patients might wish, for instance, to hide reproductive health history from billing departments, which could be granted only limited access. What about dynamic consent, whereby patients can change their preferences electronically as their health status, information, and needs change? Only through digital means could such preferences be updated over time to reflect “reasonable” patient requests, as in the sketch below.
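As a thought experiment, such patient-controlled preferences might be represented as editable policy data. The schema below is purely illustrative; the record categories, roles, and function names are my own assumptions, not an existing standard.

```python
# Illustrative sketch of dynamic, patient-editable access preferences.
# Categories, roles, and defaults are hypothetical, not a real standard.

patient_preferences = {
    "patient_id": "p-001",
    "rules": {
        # record section -> roles allowed to view it
        "reproductive_health": {"physician", "nurse"},   # billing excluded
        "billing_summary":     {"physician", "billing"},
        "general":             {"physician", "nurse", "billing"},
    },
}


def may_view(prefs: dict, role: str, section: str) -> bool:
    """Check a role against the patient's current preferences."""
    return role in prefs["rules"].get(section, set())


def update_preference(prefs: dict, section: str, roles: set) -> None:
    """Dynamic consent: the patient revises a rule as needs change."""
    prefs["rules"][section] = roles


# Billing staff cannot see reproductive health history...
assert not may_view(patient_preferences, "billing", "reproductive_health")

# ...until the patient chooses, electronically, to permit it.
update_preference(patient_preferences, "reproductive_health",
                  {"physician", "nurse", "billing"})
assert may_view(patient_preferences, "billing", "reproductive_health")
```

The design choice worth noticing is that the policy lives as data the patient can edit, rather than as rules hard-coded by the institution.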

 

Moral Reflection – The ability of software to pose morally meaningful questions could be exploited much further. Future versions could give patients, at the moment they are selecting their privacy preferences, critical information about what those options mean. By providing information at the moment of decision-making, systems could serve the broader aim of informing patients well enough to provide consent. This also matters because patient desires for privacy must be balanced against the need to ensure good care. For patients who have chosen to keep aspects of their record private (sexually transmitted infections, for example), software could give authorized providers digital prompts around “best practices” for supporting specific patient populations, providing crucial information and strategies.

 

Cognitive Friction – For many critics, modern technology has destroyed critical thinking. These critics are missing the point. Software can, by design, force thinking. Software built to intentionally create “cognitive friction” forces users to mentally slow down; asking users moral questions, for example, may compel something approaching a momentary mental focus, an ethical “slow down.” BTG processes work by exactly this friction, intentionally interrupting default modes of action and thought.
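One common friction pattern is a deliberate pause plus an explicit, typed confirmation. The sketch below is a generic illustration of the design idea; the wording, delay, and function name are arbitrary choices of mine, not how any particular BTG product works.

```python
import time

# Generic "cognitive friction" pattern: interrupt the default flow,
# impose a short pause, and require an explicit, typed confirmation.
# Purely illustrative; the delay and wording are arbitrary.


def frictioned_confirm(question: str, pause_seconds: float = 3.0) -> bool:
    """Ask a question, enforce a brief pause, and require typed assent."""
    print(question)
    time.sleep(pause_seconds)  # the enforced "slow down"
    answer = input("Type YES to continue: ")
    return answer.strip() == "YES"  # anything else counts as refusal


if frictioned_confirm(
    "Are you sure you want to view another employee's medical record?"
):
    print("Proceeding; this access will be logged.")
else:
    print("Access cancelled.")
```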

 

People are Not Perfect – Approaches that rely solely on a world composed of good people (a vision in which all hospital workers could simply be educated to never break moral rules, for example) always fail. There is a kind of “semi-automatic ethics” that systems can help to support (never perfectly) that does not rely on pristine moral actors with perfect memories and copious amounts of mental clarity, a presumption that informs much of clinical ethics and bioethics generally. Can staff who are overworked, mentally exhausted, and multi-tasking achieve the kind of Herculean morality assumed by bioethics textbooks?

Interestingly, there is something similar in the story of seatbelts. While highway speed limits (an approach that relies on driver adherence and police enforcement) did decrease driving-related fatalities, it was the addition of a technology, the seatbelt, to an ecosystem of supports that dramatically (though not entirely) reduced accident deaths. Seatbelts still rely on behavior, but they work even when drivers are speeding, making poor decisions, or victims of other drivers. Moral ecosystems are most powerful when they assume that not all actors will, can, or care to “do the right thing.” The brilliance of the seatbelt is its assumption of a world where cars crash and drivers fail.

 

Software Is no Panacea – Some will misread my argument as a claim that technology will solve our moral problems. Some will point out that technologies can fail, be corrupted, or lack enforcement. Indeed, software is no panacea. Technologies are not a perfect solution. Yet I believe there is a role for assistive devices: discrete structural supports that serve particular aims. What would it mean for bioethics to also consider systems that can help, assess, collect data around, and otherwise support larger bioethical aims as part of a moral ecosystem?

 

By adding systems to our bioethical repertoire, we move beyond classic ethical approaches that rely largely or solely on educating individuals to be morally good people: compelling staff to “respect privacy,” say, or to have greater empathy. We should encourage (and educate) people to be better, but also imagine how systems could support these flawed selves and intervene precisely where we need moral support most.

 

Mark Dennis Robinson is a 2018-2019 Petrie-Flom Student Fellow. 


Mark Robinson earned his Master’s in Bioethics at Harvard Medical School in 2019, with a project that explored the intersection of technology and ethics. A graduate of the University of Chicago, he also holds a PhD from Princeton University, where he held the Presidential Fellowship. In Summer 2019, Mark will join the Georgia Institute of Technology as a visiting scholar. Mark is also the author of a forthcoming book, "The Market in Mind: How Financialization Is Shaping Neuroscience, Translational Medicine, and Innovation in Biotechnology," about the ethical and scientific impacts of the increasing financialization of neuroscience (and of translational science and medicine in general), to be published by MIT Press in 2019. Mark's fellowship project, "Ethics for a Frail Subject: Systems, Technology, and a Theory of Global Moral Impairment," considered how bioethics might be designed around an understanding of human beings as "impaired subjects" that accounts for biological impediments to human morality.
