In October, the Petrie-Flom Center hosted a conference of world-leading experts in HIV/AIDS to discuss one of the biggest public health successes in history: PEPFAR, the President’s Emergency Plan for AIDS Relief. PEPFAR was launched in 2003 in response to a burgeoning global epidemic of HIV. The program offered $2 billion annually, rising to about $7 billion in 2019, to surveil, diagnose, treat, and reduce transmission of HIV around the world.
PEPFAR prevented what could have become an exponentially growing epidemic. It is estimated to have saved more than 17 million lives and avoided millions of new HIV infections. As a result, the speakers at the conference were quick to extol the virtues of the program. Professor Ashish Jha called it an “unmitigated success”; Professor Marc C. Elliott named it a “historic effort”; Dr. Ingrid Katz described PEPFAR as “nothing short of miraculous.”
However, several undercurrents within the conference, as well as more explicit points made by several panelists, suggested the importance of enlarging the discussion beyond PEPFAR itself to include other policies that impact HIV and AIDS, and even other diseases.
Have you ever clicked ‘I agree’ to share information about yourself on a health app on your smartphone? Wondered whether the results of a new therapy reported on a patient community website were accurate? Considered altering a medical device to better meet your own needs, but had doubts about how the changes might affect its function?
While these kinds of decisions are increasingly routine, there is no clear path for getting information about health-related devices, advice on what data to collect, guidance on how to evaluate medical information found online, or answers to concerns one might have about data sharing on patient platforms.
It’s not only patients who are facing these questions in the age of big data in medicine. Clinicians are also increasingly confronted with diverse forms of molecular, genetic, lifestyle, and digital data, and the quality, meaning, and actionability of these data are often unclear.
The difficulties of interpreting unstructured data, such as symptom logs recorded on personal devices, add another layer of complexity for clinicians trying to decide which course of action would best meet their duty of beneficence and enable the best possible care for patients.
I thought I would take this opportunity to reflect on the past year, where I will be in the future, and how the student fellowship has impacted me. I still hope to contribute to the Bill of Health blog going forward, but as my last official post as a Petrie-Flom Student Fellow, I would be remiss if I did not express my sincere gratitude to everyone at the Petrie-Flom Center, the faculty and staff, the other student fellows, and especially my mentors: Professors I. Glenn Cohen, Carmel Shachar, and Intisar A. Rabb.
My own project took a few different turns this year. My original proposal was to explore the ways in which bioethics and biomedical issues will play a significant role in reviving the dialectic between secular scholars and religious authority. Ayman Shabana rightly argues that respect for Islamic religious norms is essential for the legitimacy of bioethical standards in the Muslim context, attributing the legitimating power of these norms, as well as their religious and spiritual underpinnings, to their moral, legal, and communal dimensions. Building on Shabana’s work, my initial argument held that the relationship between the secular and religious worlds is important because the discourse between the two, although often presumed to be dichotomous, is neither antithetical nor impassable. This led me back to the arguments of the venerable philosophers Alasdair MacIntyre and Charles Taylor, whose critiques of the concept of secularism itself, along with its historical contexts, furthered my argument and helped me clarify the significant role that religion plays vis-à-vis categorical issues such as fundamental beliefs and metaphysics. I still maintain this view, and it is something I continue to work on, although I decided to take my project in another direction.
If you are a skier like me, you likely revelled in watching the alpine skiing events during this year’s Olympic Winter Games held in Pyeongchang, South Korea. Having raced myself when I was younger, I recall the feeling of being in the starting gate with all the anticipation and excitement it brings. But my memories are more than mere recollections of “images” in my head, for I also have vivid muscle memory, and when watching and cheering for Lindsey Vonn and Ted Ligety, I can literally feel my leg muscles contract as if I were on the course myself. Because I skied for so much of my life, my experience now as a spectator brings me back to the hardwired responses that I can call up even to this day, in a very intuitive way, simply by visualizing a course.
Researchers at Stanford have now corroborated what athletes and psychologists have long believed: that visualizing ourselves performing a task, such as skiing down a race course or carrying out other routines, improves our performance and increases our success rate. The findings, reported by neuroscientists in Neuron, suggest that mental rehearsal prepares our minds for real-world action. Using a new tool called a brain-machine interface, the researchers have shown how mental learning translates into physical performance, offering a potentially new way to study and understand the mind.
Could this new tool help us replicate cognitive responses to real-world settings in a controlled environment? More studies will need to be carried out to test these findings further and better understand the results. One point to take into account is that performing a real action differs from performing the same task mentally via a brain-machine interface, since one’s muscles, skeletal system, and nervous system are all working in tandem; still, such an interface would seem to have very practical implications for those who use prosthetics or who are paralyzed. As our knowledge of biomechanics and neuroscience advances, along with our ability to interface the two, we may be able to use this technology to create more lifelike prosthetics and, by harnessing the mind’s inborn processes and complex synapses, perhaps help others walk again.
Looking toward the future, another interesting avenue of research would be to use a brain-machine interface to study psychoneuroimmunology. We may not have the technology or ability to conduct such a study at the moment, but it seems plausible that in the near future we could develop the tools needed to conduct more rigorous research on the interactions between psychological processes and the nervous and immune systems. If visualizing winning a ski race improves our performance, why not also envision good health outcomes: resilient bodies, strong immune systems, plentiful and efficient white blood cells? Simply willing ourselves to health may not be possible, but having a positive outlook has been shown to affect the outcome of disease, while increased levels of fear and distress before surgery have been associated with worse outcomes. These are but a few examples of the growing evidence of the mind’s impact on health, and they highlight the importance of a holistic approach that considers the roles of behavior, mood, thought, and psychology in bodily homeostasis.
Robert Sapolsky, a professor of biology and neurology at Stanford University, rightly identifies depression as a particularly crippling disease insofar as it affects one’s very response mechanisms and modes of coping, namely, experiences of gratitude, joy, and pleasure, which are, at bottom, some of the key emotions of resistance and healing. In discussing depression, he provides an overview of the biological and chemical elements, touching on the role of neurotransmitters (epinephrine, dopamine, serotonin) in depression, and a summary of the psychological elements and their relation to the biological; as such, his description focuses primarily on physical and biological explanations. However, to examine depression, or any psychological illness, in purely physical and biological terms misses a crucial element: human culture, lived experience, and the different modes of social thought. Culture plays a primary role in defining many mental disorders, such as schizophrenia and psychosis, and even the symptoms, intensities, and typologies of depression, according to Arthur Kleinman in his seminal Writing at the Margin: Discourse Between Anthropology and Medicine.
Despite these findings, Western biomedicine by and large continues to analyze mental health in clinical and biological terms. This is not insignificant given the statistics:
Approximately 1 in 5 adults in the U.S. (43.8 million, or 18.5%) experiences mental illness in a given year.
Approximately 1 in 5 youth aged 13–18 (21.4%) experiences a severe mental disorder at some point during their life. For children aged 8–15, the estimate is 13%.
Only 41% of adults in the U.S. with a mental health condition received mental health services in the past year. Among adults with a serious mental illness, 62.9% received mental health services in the past year.
Current trends in medicine suggest that the medical community, broadly speaking, is ill-equipped to tackle this growing problem adequately, especially with regard to the treatment of diverse patients from various cultures, religions, and social circumstances. To best address the problem, the medical community, at the level of both policy and practice, ought to take steps to understand and treat mental illness more holistically.
The philosopher in me understands that there are universal principles in logic, mathematics, and basic scientific tenets such as the law of gravity. Be that as it may, the historian in me recognizes that we inherit epistemologies and ways of thinking from those who came before us, and from our own historical and cultural contexts. Certain ideas dominate the world; and while some are indeed universal, especially those based on science, a number of other concepts are only seemingly so. The concepts of personhood, divinity, self, and even society as we tend to understand them today are largely inherited from a Western, Christian worldview. As these ideas have been subjected to philosophical inquiry throughout history, they have either been decoupled from their origins in religious thought, or they have been secularized and rationalized à la Kantian categorical imperatives or the like, and then disseminated in universities, institutions, cultures, and literatures.
On one level, to speak of the Western world as “secular” is, as the philosopher Charles Taylor notes, to say that “belief in God, or in the transcendent in any form, is contested; it is an option among many; it is therefore fragile; for some people in some milieus, it is very difficult, even ‘weird’” (Taylor 2011, 49). But on another and much deeper level, this very possibility was only ever tenable on account of two major factors: “First, there had to develop a culture that marks a clear division between the ‘natural’ and the ‘supernatural,’ and second, it had to come to seem possible to live entirely within the natural” (Taylor 2011, 50). This was only possible because of a unique philosophical climate that actively sought to dislodge the old form of moral order and social “embeddedness” in an attempt to establish a “purely immanent order.” Taylor’s groundbreaking work, A Secular Age, argues that secularism is part of a grand narrative in the West and shows that its historical and cultural foundations are in fact thoroughly Christian and European. He pushes back against Max Weber’s secularization thesis, which holds that religion diminishes in the modern world in the wake of increasing developments in science and technology, and instead gives a different account of what secularism might mean: one with deep implications for morality, politics, and philosophy.
Think of the last few times you’ve had a very lifelike dream. Running, reading, or having conversations with others are all activities that might happen during a particularly vivid dream. But would this be considered consciousness? Surely being in a state of sleep is not the same as being in a waking state; but if you are able to communicate, to attend a lecture, perhaps even to give a lecture whilst you sleep, what does this mean in terms of your brain’s activity? Very deep in the sleep cycle, a person may not respond immediately to touch, sound, or any other sensory stimulus. That is, they may not wake up, though it cannot be ruled out that an external stimulus might influence the subconscious mind and hence their dream. We’ve all had the experience of hearing an alarm “in our dream” that is really our actual alarm, yet our mind reinterprets it and incorporates it into our dream until we regain consciousness, i.e., wake up. What if you couldn’t wake up from your unconscious state? What would this mean for how your brain processes your thoughts? In effect, what would it mean for your lived reality if you could only live in your mind?
Beyond being a fun thought experiment, these questions may now be quite relevant: doctors have treated a patient in a vegetative state with an experimental therapy, leading him to regain partial consciousness.
It was reported yesterday in National Geographic, Popular Science, the Guardian, and elsewhere that a 35-year-old man who had been in a persistent vegetative state (PVS) for 15 years has shown signs of consciousness after receiving a pioneering therapy involving nerve stimulation. The French researchers reported their findings in the journal Current Biology. Led by Angela Sirigu, a cognitive neuroscientist and director of the Institut des Sciences Cognitives Marc Jeannerod in Lyon, France, a team of clinicians tried an experimental form of therapy called vagus nerve stimulation (VNS), which involves implanting into the chest a device designed to stimulate the vagus nerve. It works by delivering minuscule electrical shocks to the vagus nerve, a cranial nerve that interfaces with parasympathetic control of the heart, lungs, and digestive tract.
There is no lack of controversy when talking about religion and medicine in America today. Medicine is studied, practiced, and firmly rooted in the corporal world while religion draws inspiration from texts, traditions, and the incorporeal. Yet from an historical perspective, religious pasts do shape the present, particularly in the realm of ethics and moral reasoning. Indeed, whatever one’s spiritual or philosophical predilections, religion continues to play a major role in the dialogue on medicine and health care in Western society.
Bioethics in particular has become a topic of growing interest in America, but there has been little critical discussion about its contextual underpinnings, which stem largely from a Western Christian perspective. This is not to say that another religion would arrive at a radically different system of morals. While differences do exist among religious traditions, across both space and time, experience and common sense tell us that diverse religious traditions share many of the same moral principles and foundations. So what might other religious traditions say about, or contribute to, the discourse on bioethics? Should religion even be included in the conversation, especially given that health care and healing belong to the sphere of medicine?