The 21st Century Trolley

By Gali Katznelson

Here’s a 21st-century twist on the classic trolley dilemma from ethics: The trolley is a car, you are the passenger, and the car is driving itself. Should the autonomous car remain on its course, killing five people? Should the car swerve, taking down a different bystander while sparing the original five? Should the car drive off the road and kill you, the passenger, instead? What if you’re pregnant? What if the bystander is pregnant? Or a child? Or holds the recipe for a cure for cancer?

The MIT Media Lab took this thought experiment out of the philosophy classroom by allowing users to test their moral judgments in a simulation. In this exercise, participants decide which unavoidable harm an autonomous car must commit in difficult ethical scenarios such as those outlined above. The project is a poignant perversion of Philippa Foot’s famous 1967 trolley dilemma, not because it allows participants to compare their own judgments with those of other participants, but because it indicates that the thought experiment actually demands a solution. And fast.

Several companies, including Google, Lyft, Tesla, Uber, and Mercedes-Benz, are actively developing autonomous vehicles. Just last week, the U.S. House of Representatives unanimously passed the SELF DRIVE (Safely Ensuring Lives Future Deployment And Research In Vehicle Evolution) Act. Among several provisions, the act allows the National Highway Traffic Safety Administration to regulate a car’s design and construction, and designates states to regulate insurance, liability, and licensing. It also paves the way for car manufacturers to test 25,000 autonomous cars in the first year, and up to 100,000 cars within three years.

There are some clear public health advantages to the rise of self-driving cars. They could help reduce pollution, because they promote car sharing and would spend less time idling. They could help individuals with disabilities get behind the wheel, if designed with accessibility in mind. Most significantly, they have the potential to greatly reduce harm to human life: they could save over 29,000 lives a year from traffic accidents in the U.S., since almost all road deaths are caused by human error. The question remains: whose lives, exactly, should they spare when harm to one party is unavoidable?

Public opinion is a good place to start in answering this question. A recent study revealed that most people are utilitarian when it comes to the 21st-century trolley problem: they would sacrifice one individual to save the five. However, these same participants also reported that they would be less likely to ride in a car that would not protect them at all costs, pointing to the disconnect between moral principles and self-preservation. This response poses a dilemma for those whose job it is to sell cars. Should manufacturers default to the consumer’s self-interest in the design of self-driving cars, or should they adopt a different ethical framework?

Last year, Mercedes-Benz’s Manager of Driver Assistance Systems and Active Safety appeared to endorse the consumer’s self-interested view, reportedly claiming that Mercedes-Benz would program its autonomous cars to prioritize the safety of their passengers above all else. Mercedes-Benz later issued an official statement saying that the executive had been misquoted and that the company was not entitled to weigh the value of human lives.

The company’s official statement aligns with the world’s first guidelines on self-driving cars, presented several weeks ago by the German government and developed by the Ethics Commission at the German Ministry of Transport and Digital Infrastructure. The guidelines, comprising 20 principles, state that cars should prioritize human life over property as well as over the lives of animals. They also address weighing lives against one another:

In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another. General programming to reduce the number of personal injuries may be justifiable. Those parties involved in the generation of mobility risks must not sacrifice non-involved parties.

By my reading, the final sentence points to the rise of suicidal cars. It is unclear which “parties” the ethics commission considers to be “those involved in the generation of mobility risks.” If it refers to the passengers in the car, the statement suggests that the lives of bystanders may in fact be prioritized over the lives of those in the car; after all, aren’t passengers always “involved in the generation of mobility risks”? This prospect will affect self-driving car buyers, who generally aren’t comfortable riding in a car that could kill them to save a pedestrian. It will also affect car manufacturers, who need to find a way to market potentially suicidal cars to consumers.
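
To make the tension concrete, here is a deliberately toy sketch, in Python, of what “general programming to reduce the number of personal injuries” might look like if a manufacturer took that clause at face value. Everything in it, from the Outcome class to the choose_action function to the scenario itself, is invented for this post; it illustrates the logic, not any real system.

    # A toy illustration, not anyone's actual software: the names Outcome,
    # choose_action, and the scenario below are all invented for this post.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        label: str              # e.g. "stay on course"
        injuries: int           # expected number of people harmed
        passenger_harmed: bool  # does this option harm the car's own passenger?

    def choose_action(outcomes: list[Outcome]) -> Outcome:
        """Pick the option with the fewest expected injuries.

        In the spirit of the German guideline, no personal features (age,
        gender, health) appear in the decision; only the injury count does.
        """
        return min(outcomes, key=lambda o: o.injuries)

    # The scenario from the opening paragraph:
    scenario = [
        Outcome("stay on course", injuries=5, passenger_harmed=False),
        Outcome("swerve into bystander", injuries=1, passenger_harmed=False),
        Outcome("drive off the road", injuries=1, passenger_harmed=True),
    ]

    best = choose_action(scenario)
    print(best.label, best.passenger_harmed)
    # Prints "swerve into bystander False" only because that option happens
    # to come first in the list: a rule that counts injuries alone is
    # indifferent between the two one-injury options, so nothing in it
    # protects the passenger over the bystander.

The point of the sketch is that a rule built only on injury counts is blind, by design, to whose injuries they are, which is exactly what makes it both unbiased and, from the buyer’s seat, unsettling.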

Elsewhere, the guidelines seem to sidestep the possibility that autonomous vehicles will have to make difficult ethical decisions at all. They state:

Based on the state of the art, the technology must be designed in such a way that critical situations do not arise in the first place. These include dilemma situations, in other words a situation in which an automated vehicle has to “decide” which of two evils, between which there can be no trade-off, it necessarily has to perform.

The guidelines are a bold first step toward developing a regulatory framework for the inevitable future of autonomous vehicles. And yet, the prospect of a car that never encounters “dilemma situations” seems unlikely, at least in the near future. It is certainly a tall order for manufacturers, who will need to wrestle with the tensions between consumer preferences, such as self-preservation, and ethics.

Gali Katznelson

During her fellowship year, Gali Katznelson was an MBE candidate at the Center for Bioethics at Harvard Medical School. Before her master's degree, she completed a bachelor’s degree in Arts & Science at McMaster University in Canada. Her fellowship project focused on clinicians' perceptions of the uses and regulations of smartphone mental health apps.
