In 2013, Kim Kardashian entered Cedars-Sinai Medical Center in Los Angeles.
During her hospitalization, unauthorized hospital personnel accessed Kardashian’s medical record more than fourteen times. Secret “leaks” of celebrities’ medical information had, unfortunately, become de rigueur. Similar problems befell Prince, Farrah Fawcett, and perhaps most notably, Michael Jackson, whose death stoked a swelling media frenzy around his health. While these breaches may seem minor, patient privacy is ethically important, even for the likes of the Kardashians.
Since 2013, however, a strange thing has happened.
Across hospitals both in the U.S. and beyond, snooping staff now encounter something curious. Through software, staff must now “Break the Glass” (BTG) to access the records of patients outside their circle of care, and so physicians unassociated with Kim Kardashian’s care must BTG to access her files.
As part of the BTG process, users are prompted to provide a reason why they want to access a file.
S. Matthew Liao is Director of the Center for Bioethics and Arthur Zitrin Professor of Bioethics at the NYU College of Global Public Health. He uses the tools of philosophy to study and examine the ramifications of novel biomedical innovations.
I thought I would take this opportunity to reflect on the past year, on where I will be in the future, and on how the student fellowship has impacted me. I still hope to contribute to the Bill of Health blog going forward, but in my last official post as a Petrie-Flom Student Fellow, I would be remiss if I did not express my sincere gratitude to everyone at the Petrie-Flom Center: the faculty and staff, the other student fellows, and especially my mentors, Professors I. Glenn Cohen, Carmel Shachar, and Intisar A. Rabb.
My own project took a few different turns this year. My original proposal was to explore the ways in which bioethics and biomedical issues will play a significant role in reviving the dialectic between secular scholars and religious authority. Ayman Shabana rightly argues that respect for Islamic religious norms is essential for the legitimacy of bioethical standards in the Muslim context; he attributes the legitimating power of these norms—as well as their religious and spiritual underpinnings—to their moral, legal, and communal dimensions. Building on Shabana’s work, my initial argument held that the relationship between the secular and religious worlds is important because the discourse between the two, although often presumed to be dichotomous, is neither necessarily antithetical nor impassable. This led me back to the arguments of the venerable philosophers Alasdair MacIntyre and Charles Taylor, whose critiques of the concept of secularism and its historical contexts furthered my argument and helped me clarify the significant role that religion plays vis-à-vis categorical issues such as fundamental beliefs and metaphysics. I still maintain this, and it is something I continue to work on, although I decided to take my project in another direction.
There is a lot of fascinating, cutting-edge research on the brain coming out of Stanford University. Early last month I reported on findings by neuroscientists at Stanford on how mental rehearsal prepares our minds for real-world action. Today, I’ll outline the recent advances made by a team led by Sergiu Pasca, MD, assistant professor of psychiatry and behavioral sciences at Stanford University, and discuss some of the ethical implications of this research.
Pasca’s method enables him to culture cells in order to form brain organoids with robust structures that are not compromised by cells from other parts of the body, thereby allowing him to more accurately replicate distinct brain regions. Doing so provides greater structural organization and also allows him and his team of researchers to better study and understand pathological mechanisms and perhaps one day to examine the molecular, cellular, and circuit levels of a person’s neurons. This is a promising method and a big step toward greater understanding of psychiatric and neurological disease, leading Pasca to declare, “This is our doorway into personalized psychiatry.” At the same time—although these “brain balls” are not brains, nor do they receive sensory inputs from the outside world—it is clear that as scientists progress in both the techniques and complexity of replication, major ethical questions and dilemmas will arise.
Chief among these will undoubtedly be the perennial ethical debate about the ontology of a human being. Is it only physical, material, social—in which case we might think of ourselves as technicians—or is it spiritual, religious, metaphysical—in which case we would more likely consider ourselves custodians? When we speak about attributing rights to animals or consciousness to AI, it is because at bottom we hold some fundamental belief: about dignity, a soul, being, or about what life might mean in a relational or social and emotional sense. This is no different with Pasca’s brain balls; in fact, it is an even more pressing quandary. As Bruce Goldman notes in his article, “One of the most amazing things about their brain balls was that, with not much chemical guidance, they tended to take on a default structure that’s a facsimile of the most evolutionarily advanced part of the brain: the human cerebral cortex, with all six layers you find in a living human brain.” The ethics of growing human organs is one thing, but the ethics of growing brain balls, which might eventually lead to more and more complex synaptic connections followed by even more elaborate renditions of an actual brain, will become especially contentious given the meaning and significance that we associate with the brain—both biologically and existentially.
With few exceptions, most cultures put Homo sapiens at the center or the apex of creation. Humans, it is generally believed, are distinguished from other animals by our self-awareness and our ability to use tools, to think, reason, and construct meaning and representations about life. The Abrahamic religious traditions are most notable in their anthropocentric vision of human purpose in creation; and although the metaphysics and teleology are sometimes challenged by advances in science and technology, the fact remains that human beings remain the paradigmatic case against which other animals or even artificial intelligence is measured. As a Muslim and a theist, I avow my belief in the unique status of humans; however, as someone who also believes in science and is keenly attuned to the environment, I have a great love for nature and the animal world, and a great desire to protect them.
It is with this in mind, then, that I want to propose putting ethics before metaphysics in considering the moral status of what legal scholars and ethicists call “non-human persons.” In particular, I want to look at the cetacean intelligence of orcas, dolphins, and whales to understand how we might classify them as non-human persons to whom certain rights and protections would be given. Doing so, I argue, would enable us to collapse the bifurcations that influence much of Western thought, thereby ushering in a more holistic, ecological, and relational approach to ethics and being.
To begin with, I would like to make a distinction clear: I am not claiming that orcas, for example, are morally equivalent to humans, but I am suggesting that we ought to be more cautious with regard to understanding our place in the animal world as a whole, particularly as it relates to the precariousness of life itself. My argument below follows philosophical and ethical reasoning, though it might also be understood in the context of religious texts. The story of Yunus (also known as Jonah) and the whale is found in both the Bible and the Qur’an. In short, Yunus felt discouraged that the people of Nineveh did not heed his call to worship God, and so he left in anger. Being cast into the sea and then swallowed by the whale was ostensibly punishment for his loss of hope and for leaving the city without God’s permission; on another level, however, exegetical scholars point to his supplication—“O Lord! There is no god but you: Glory to you: I was indeed wrong” (Qur’an 21:87)—as instructive of submission to God’s will and the significance of humility. Indeed, the Qur’an goes on to say elsewhere: “Had he not been of those who exalt God, he would certainly have remained inside the whale until the Day of Resurrection” (Qur’an 37:143-144). The whale, on this reading, is integral to the Abrahamic worldview insofar as it is a manifestation of God’s power and dominion over creation, as well as His lesson to human beings to remain humble.
Medicine is meant to heal our ailments and treat our illnesses. Our deep knowledge of the body and the numerous mechanisms that contribute or correlate to good health is considered a triumph of the medical sciences. We can now perform transplants with relative ease, offer prosthetics to those who require them, and even cure some forms of blindness. But so much of modern medicine today is built around quantitative data—family histories, success and morbidity rates, pathologization, statistical analyses—without much conscious consideration of how one understands, copes with, or derives meaning from one’s experience. True, such data is gathered for the purposes of more accurate diagnoses and as the first defense against an illness or medical condition; but physicians are taught to concentrate on the cure, and while few would dispute that this is certainly a good thing, we also ought to keep in mind that excessive focus on a default measure of “normal” does not necessarily allow us to express the diverse ways of being in the world, nor does it adequately account for the ways in which people embrace their conditions.
Some autistic individuals, for example, believe that autism should be accepted as a difference and not as a disorder. That the autism spectrum is precisely that—a spectrum—is important: on the one hand, statistical analysis may reveal that these individuals are a small minority of the population, perhaps only 1%; on the other hand, from a different perspective, it means merely that the characteristics of these individuals manifest in ways that are atypical relative to how the institution and culture of medicine classifies them. Lest we forget, medicine is part of the dynamic structure of society and social norms, operating in both the background and the foreground of knowledge-making, and it is embedded in place and society as part of the structures existing in institutions. It is not possible to consider theoretical or epistemological claims apart from practical knowledge and applied sciences.
One of the more contentious bioethical and legal issues concerns the beginning of human life. Nor is it difficult to grasp why, for beyond political rhetoric it is a subject of considerable philosophical and legal debate and raises a number of questions that are profoundly difficult to answer. Biomedicine can roughly differentiate when life becomes viable, that is, the point at which a fetus could survive as an infant if the mother gave birth prematurely; it can likewise recognize potential complications in either the development of the fetus or the health of the pregnant woman. Yet other questions are not as easy to answer, precisely because they fall more within the realm of philosophy or personal belief: What constitutes a human being? What is a person? Is a potential life accorded the same rights as an actual life? For that matter, do rights obtain automatically, or are there criteria that must be met in order to procure them? In short, these are questions that strike at the very core of who we are.
A number of these questions were debated by Muslim theologians and legal scholars in the pre-modern world when considering abortion or issues surrounding paternity. In the modern world, these questions have grown to include in vitro fertilization and surrogacy, among others. Muslim scholars continue to grapple with these bioethical questions as the medical sciences grow more advanced and technology allows us ever more control over the basic aspects of reproduction, growth, and development. On the question of when human life begins, for example, Mohammed Ghaly, in an important article, “The Beginnings of Human Life: Islamic Bioethical Perspectives,” analyzes some of the newer discussions and positions Muslim scholars have taken vis-à-vis contemporary bioethics and independent legal reasoning (ijtihad). Complementing this discussion is a seminal article by Ayman Shabana, “Paternity Between Law and Biology: The Reconstruction of the Islamic Law of Paternity in the Wake of DNA Testing.” Shabana shows how classical rulings pertaining to paternity issues continue to hold higher authority, despite the advent and availability of modern technology that would ostensibly challenge that authority. This is interesting for a number of reasons, not least of which is the possible change in perspective with regard to how religious authority is derived and its relationship to the medical sciences.
An important question in Islam, recurrent across time and space, is whether Islamic political theory recognizes rights claims against the state as distinct from rights claims against other members of the community. This continues to be an important subject today, intersecting the fields of law, religion, and moral philosophy. The classical tradition is divided on the matter: the legal theory of the Shafi’i school of jurisprudence holds that rights are to be accorded via religious authority, while the Hanafi school emphasized the universality of the notion of human inviolability (dhimma)—and the innate rights that derive from it—as God-given, universal, and applicable to all societies from the beginning of time.
Whereas in Western law there is generally a separation between law and ethics, in the Islamic tradition there is more of a dialectical tension between the two: where religious inwardness is more highly developed, attitude and intention are weighed more heavily, whereas in its absence, formalism and legalism are advanced as the ethical ideal.
Much of what we fear about artificial intelligence comes down to our underlying values and perception about life itself, as well as the place of the human in that life. The New Yorker cover last week was a telling example of the kind of dystopic societies we claim we wish to avoid.
I say “claim” not accidentally, for in some respects the nascent stages of such a society already exist; perhaps they have existed for longer than we realize or care to admit. Regimes of power, what Michel Foucault called biopolitics, are embedded in our social institutions and in the mechanisms, technologies, and strategies by which human life is managed in the modern world. This arrangement can be positive, neutral, or nefarious, for it all depends on whether these institutions are used to subjugate (e.g., racism) or liberate (e.g., rights) the human being, and whether they infringe upon the sovereignty of the individual or uphold the sovereignty of the state and the rule of law; in short, biopower is the impact of political power on all domains of human life. This is all the more pronounced today in the extent to which technological advances have enabled biopower to stretch beyond the political to almost all facets of daily life in the modern world.
The philosopher in me understands that there are universal principles in logic, mathematics, and in basic scientific tenets such as the law of gravity. Be that as it may, the historian in me recognizes that we inherit epistemologies and ways of thinking from those before us, and from our own historical and cultural contexts. Certain ideas dominate the world; and, while some are indeed universal, especially those based on science, the fact remains that a number of other concepts are only seemingly universal. The concepts of personhood, divinity, self, and even society as we tend to understand them today are largely inherited from a Western, Christian worldview. As these ideas have wrestled with philosophical inquiry throughout history, they have either been decoupled from their origins in religious thought, or they have been secularized and rationalized à la Kantian categorical imperatives or the like—and then disseminated in universities, institutions, cultures, and literatures.
On one level, to speak of the Western world as “secular” is, as the philosopher Charles Taylor notes, to say that “belief in God, or in the transcendent in any form, is contested; it is an option among many; it is therefore fragile; for some people in some milieus, it is very difficult, even ‘weird’” (Taylor: 2011, 49). But on another and much deeper level, this very possibility was only ever tenable on account of two major factors: “First, there had to develop a culture that marks a clear division between the ‘natural’ and the ‘supernatural,’ and second, it had to come to seem possible to live entirely within the natural” (Taylor, 50). This was only possible because of a unique philosophical climate that actively sought to dislodge the old form of moral order and social “embeddedness” in an attempt to establish a “purely immanent order.” Taylor’s groundbreaking work, A Secular Age, argues that secularism is part of a grand narrative in the West and shows that its historical and cultural foundations are in fact thoroughly Christian and European. He pushes back against Max Weber’s secularization thesis—that religion diminishes in the modern world in the wake of increasing developments in science and technology—and instead gives a different account of what secularism might mean: one that has deep implications for morality, politics, and philosophy.