Limits of Technological Solutions to Moral Problems

By Matthew L Baum

In my last blog post, I suggested that we consider incentivizing scientists and engineers to develop technologies that side-step the ethical dilemmas entangling certain current technologies. I highlighted that these morally modifying technologies 1) neither resolve a moral debate nor take a side in it, 2) usually do not function empirically better than existing technology, and 3) make a moral dilemma less practically problematic by providing a technological work-around. I offered induced pluripotent stem cells, blood recirculators, and fixed-time ventilators as three examples of morally modifying technologies. But when is it a bad idea to encourage the development of morally modifying technologies?

In response to an excellent comment on that post by Joanna Sax, I would like to extend my initial description of technological solutions to moral problems to a discussion of their limits and the potential problems that might accompany them. I will begin here with the three externalities Joanna suggested and start a discussion of how they might be avoided.

Externality 1: pushing scientists away from an area of inquiry (like embryonic stem cells), e.g. through decreased funding.

Judging this as a negative externality depends on the moral assumption that pushing scientists away from a certain area is bad, an assumption that is sometimes but not always true. One way it could be bad is if the former technology were superior in some important way and we lost something of great value in abandoning it. If, on the other hand, the new technology is in no way worse than the first but is less morally objectionable (even to only a group of people), there would be no argument that being pushed away from the first is bad. For example, the Wyss Institute at Harvard is trying to develop a “lung-on-a-chip” that simulates the dynamics of human lung tissue by growing human lung cells on a clever chip; if the lung-on-a-chip turns out to be no less informative about the toxicity or potential efficacy of new drugs than existing animal models (to which some object), then it would not be a bad thing (and potentially a very good thing) to push scientists away from continued use of animal models for this type of research. If the predictive validity of the lung-on-a-chip becomes moderately better than that of existing animal models, we might even see institutional review boards (IRBs) requiring its use over animal models.

Morally modifying technologies could also have an indirect economic effect, however, on the ability of scientists to conduct meaningful work. Society might end up “paying double” for some technological functionality: paying once for the development of the first (morally problematic) technology and again for the morally modifying one. This extra cost might stymie the progress of science and the improvement of health care. But because the social and political costs of a moral dilemma can also stymie progress, the sidestepping that a morally modifying technology enables may actually allow us to better achieve our initial goals (if, for example, the technology is used by more people and thus saves more lives) and so be worth the extra development cost.

Another caveat is that there would need to be mechanisms in place to minimize the risk of overgeneralizing the fungibility of morally modifying technologies, for example, treating the lung-on-a-chip as a replacement for all basic pulmonary research using animal models when it cannot realistically serve that role. This overgeneralization would be a worry if funding were determined by those who did not understand the technology’s practical limitations, which brings us to the second externality.

Externality 2: enhancing problematic non-scientist control of science.

The core of this externality is the worry that non-scientists may make this sort of overgeneralization error. If funding agencies have strong technical advisory roles, however, or if applications are evaluated by scientists, the risk of this sort of overgeneralization would be reduced. Moreover, if grants for the development of morally modifying technologies were decided by a mixed committee with a strong scientific presence, 1) scientists would still be able to guide the direction of research and 2) the risk of the governmental grant process being hijacked for petty partisan goals would be minimized.

That said, there are good reasons why non-scientists should have a say in the direction of science: one is that their tax dollars fund it; another is that they do not have the same biases and potentially conflicting interests as scientists (though of course they have different biases and conflicts of their own), which is one reason a lay person is required to sit on every IRB. So increasing public interest in the direction of science would not necessarily be a bad thing.

Externality 3: encouraging development of work-around technologies every time a group doesn’t like something might border on a policy of censorship.

First, examples like the Tuskegee syphilis experiment and the Willowbrook hepatitis experiments have highlighted why science should not have the same unbounded freedom as artistic expression. Limits to scientific freedom, therefore, may not be as morally problematic as limits to freedom of speech or art. Certain experiments should not be conducted, even if that means that important scientific questions will remain unanswered. This is a major reason why some of our bread-and-butter drugs in medicine have never undergone a rigorous clinical trial: we already have good reason to think they work (though not the hard data), and so it would be unacceptable to conduct a trial in which a large portion of participants intentionally received what we expect to be inferior treatment.

Second, funding for the development of morally modifying technologies may not come exclusively from the government and thus could enable research that would not ordinarily be funded under existing paradigms (and thereby actually increase scientific freedom). For example, if private foundations that would not ordinarily fund science began to offer grants for research on morally modifying technologies, then the total funding available for science would increase, and so would the possible diversity of research.

I certainly have not addressed all possible externalities of morally modifying technologies here, but I hope to have provided a rough beginning to the discussion of how we might responsibly promote engineering solutions to moral problems.


During his fellowship, Matthew Baum was a second year MD-PhD student in the Health Science and Technology (HST) combined program of Harvard and MIT, where he integrated his interests in clinical, scientific, and ethical aspects of mental health. He holds a DPhil from the Oxford Centre for Neuroethics, where his doctoral work, supported by a Rhodes Scholarship, concerned the ethical implications of the development of predictive biomarkers of brain disorders. Matthew also completed an MSc in Neuroscience at Trinity College Dublin as a George Mitchell Scholar and holds a BS and an MS in Molecular Biology from Yale. During his medical and neuroscience training he maintained a strong engagement with neuroethics; he has acted as the student representative to the International Society for Neuroethics. During his time at the Petrie-Flom Center, Matt researched the intersection of biological risk and disorder.

One thought on “Limits of Technological Solutions to Moral Problems”

  1. Thank you for your thoughtful follow-up to my comment. I’ll offer a couple of thoughts about your post, but I hope we have a chance to continue our discussion off-line or at a future conference. The first concern, that it pushes scientists away from a particular area, does not necessarily require a moral assumption about the type of research; rather, it says: “you can only ask certain questions if they don’t offend some people’s morals.” Scientific inquiry needs room to challenge.

    Second, I agree that non-scientists should be involved in the funding of basic science. The NIH includes many non-scientists in the decisions regarding the funding of grant applications. But, the initial review of grants is conducted by scientific experts. That is, the grant application must show scientific merit to receive a score and move to the next stage of review.

    Third, I agree with you that some experiments should not be conducted – the Tuskegee syphilis experiment is an example. But, not everything rises to the level of the egregious conduct in the Tuskegee syphilis experiments and there are other ways to protect scientific integrity and patient safety that do not require morally modifying techniques. Recombinant DNA technology is a good example where the scientific community came together to create safeguards to allow scientists to readily utilize this important technology.

    I very much enjoy your posts and the thoughts that you bring to the table. I look forward to continuing this discussion with you. Thanks!
