Opinion
Artificial Intelligence and Machine Learning

The Problem with the Trolley Problem and the Need for Systems Thinking

Routes toward more deliberation in technology development.

In 1967, English philosopher Philippa Foot wanted to discuss the doctrine of double effect2: the difficulty of evaluating an action that is done with good intentions but also brings about harm as a side effect. She invited readers to imagine being “the driver of a runaway tram which he can only steer from one narrow track onto another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed.”

To explore moral reasoning, people have created variations of what is now known as the Trolley Problem: for instance, a situation in which you have a lever that you can use to divert the trolley, so that one person is killed instead of five, or a situation in which you stand on a bridge above the track, with a person whom you can throw in front of the trolley to stop it. (Fewer people choose to throw this person, because it feels like killing that person, whereas pulling the lever feels like having the trolley do the killing.) Meanwhile, the Trolley Problem has become a trope, appearing in a comedy series and in many conversations about technology and ethics.

The Problem

The Trolley Problem has inspired scores of psychology experiments, including MIT’s Moral Machine,1 an online survey in which people had to decide what a self-driving car should do in case of an impending accident. Participants were given a series of pairs of scenarios, presented as map-like diagrams, with various numbers and types of pedestrians and passengers. For each pair of scenarios, they had to choose between options such as driving ahead and killing pedestrians, or veering into an obstacle and killing passengers. Based on 40 million responses from more than 200 countries, the researchers found general preferences, such as sparing humans over animals. They also found differences between cultures: people from countries with collectivistic cultures prefer sparing the lives of older people over the lives of younger people, and people from poorer countries with weaker institutions are more inclined to spare pedestrians who cross illegally.

Clearly, Foot did not intend to present a practical problem that we can solve through computer science or software engineering. However, maybe such experiments can provide some clues for programming self-driving cars? Maybe indeed. Maybe not.

There are at least two reasons why a study like the Moral Machine is inadequate as a guide for developing self-driving cars. First, it probed people’s preferences. Scottish philosopher David Hume famously reminded us not to confuse is and ought3: you cannot derive moral statements from empirical findings. If people do X, or say that they would do X, it does not automatically follow that X is morally acceptable or recommendable. This is called the naturalistic fallacy: the incorrect belief that what is the case also ought to be. Second, the survey used scenarios that depict different types of pedestrians rather stereotypically: homeless people, with a baggy coat; business people, with a suitcase; medical personnel, with a first-aid kit; and criminals, with a mask and loot. Participants probably took these differences into account in the survey. In practice, however, people do not carry such identifiers, and it would be undesirable or unrealistic to require them to do so, so self-driving cars cannot take their identities into account.

The problem with invoking the Trolley Problem in discussions of ethics and technology is that people might believe that surveying people’s preferences can “solve” ethical questions and that computer scientists and software engineers can “solve” ethics through calculation and optimization. (An interesting view on the Trolley Problem, from a virtue ethics perspective, is offered by Liezl van Zyl.10) In this Communications Opinion column, I present an alternative approach, more aligned with the original function of the Trolley Problem as a thought experiment: to promote ethical reflection and deliberation regarding the development and deployment of technologies7,8 and, ultimately, to bring about more ethical design decisions.

Context

First, we can put the trolley, the workers, and the moral agent in a larger context to better understand the problem and to envision multiple approaches to enhancing the safety of emerging technologies. The original description is parsimonious on purpose: a trolley with a broken brake, two tracks with people, and you with a lever. Real life, however, is more complex. We can turn to systems thinking4 to understand and appreciate how seemingly separate elements relate within a larger sociotechnical system around the trolley.

For example, looking at the trolley itself: why is there no horn on the trolley to warn people on the track? And why did the brakes not work properly? Was there a flaw in the design of the brake system? Looking at the company: has it examined past accidents to learn from them? Was there a lack of maintenance? If so, why? Was the maintenance budget insufficient? Were maintenance experts made redundant? How does the trolley company prioritize spending its budget? Does it pay fair salaries? Are employees encouraged or rewarded to report errors or accidents? Or does the company work with subcontractors that employ “gig workers”? Did the trolley driver get sufficient rest between shifts? What are their working conditions like? Zooming out even further, we can ask questions about regulation and governance. What health and safety standards are in place for people who work on the tracks? Did the state privatize the trolley services? How is competition regulated between different service providers? Are there incentives for them to collaborate on safety? Is there a traffic control center? How well does it function?

By zooming out and zooming in and asking these kinds of questions, we can understand the situation as part of a larger sociotechnical system. This allows us to identify additional factors that contributed to the accident and additional levers that can be used to prevent future accidents or reduce harm.

Agency

Second, we can reflect on our agency as professionals. We can do more than pull a lever. Drawing attention to the interaction between agency and structure helps to empower researchers, developers, designers, engineers, and managers. What can you do? And how does the context in which you work—the organization and its culture, the project and its management—influence your agency?

Typically, a Trolley Problem has only two options: pull the lever or not; run over pedestrians or crash into an obstacle. Such a narrow view of agency is so unrealistic that it makes very little practical sense as a thought experiment for educating professionals. You can do so much more than choose between pulling a lever or not pulling a lever. You can use your curiosity to better understand the wider context. You can use your creativity to come up with all sorts of solutions, from very simple ones (for example, yelling loudly to warn the workers on the track) to more complex ones (for example, empowering repair and maintenance personnel to learn from past accidents and implement solutions to prevent future ones). As professionals, we can take all sorts of initiatives, in diverse domains, to prevent accidents with trolleys—and with all sorts of other technologies.

Systems Thinking

We can also use systems thinking, on at least three different levels, to systematically look for opportunities to improve ethical reflection and deliberation, and to develop and deploy technologies more ethically. First, we can look at the level of a specific application: in the case of the Trolley Problem, the malfunctioning of the brakes; the people who are, or should be, involved in a specific development and deployment project; and their different roles and agency, for example, who can, should, or dares to start a conversation on an uneasy topic such as malfunctioning brakes and preventing future accidents. Here, we can find “value levers” to integrate ethics into the design process.6 Second, we can look at the level of organizations: in the case of the Trolley Problem, the various organizations involved in the development and deployment of brake systems, maintenance efforts, and so forth. On this level of analysis, how do people and machines interact, and how do they get information from the environment and interact with the environment? Third, we can look at the level of society: for example, how the application of specific technologies affects different groups and how benefits and costs are distributed, such as owners of self-driving cars versus pedestrians, or citizens who pay taxes for public infrastructure and transport versus corporations that evade paying tax and benefit from such infrastructure.

Systems thinking typically involves moving back and forth between different levels of abstraction to develop a more complete picture. When people talk about abstract concepts, like justice, you can zoom in and ask a question about some practical detail. And when people talk about details, you can zoom out and ask more general questions. For example, for the design and application of an AI system, this would involve looking beyond the algorithm in a narrow sense. It would involve looking at the processes in which the algorithm will be used, how it will be used, and how its deployment might affect these processes over the course of time.9 We can imagine, for example, an AI system’s impact on collaboration between frontline employees and customers, or between civil servants and citizens.

Systems thinking also highlights the importance of feedback loops; these provide information about the direction in which the system is moving, and mechanisms to steer, or even change, the system. Without such feedback, “a statistical engine can continue spinning out faulty and damaging analysis while never learning from its mistakes.”5 These feedback loops can turn into virtuous circles if they help to bring about desirable outcomes, or into vicious cycles if they do not; many examples of both are available.

Donella Meadows, a pioneer of systems thinking, suggested that we have the most leverage for steering or changing a system when we question the goals and underlying assumptions (paradigms) of the system as a whole, rather than focusing on separate elements at the surface.4

You can do so much more than philosophize about choosing between pulling or not pulling that lever. If you engage in ethical reflection and deliberation through the lens of systems thinking, you can exert that leverage and contribute to the development of morally and socially more beneficial technologies.

    References

    • 1. Awad, E. et al. The Moral Machine experiment. Nature 563, 7729 (2018).
    • 2. Foot, P. The problem of abortion and the doctrine of the double effect. Oxford Rev. (1967).
    • 3. Hume, D. A Treatise of Human Nature. London, U.K., 1739.
    • 4. Meadows, D.H. Thinking in Systems: A Primer. Chelsea Green Publishing, White River Junction, VT, 2008.
    • 5. O’Neil, C. Weapons of Math Destruction. Penguin, London, U.K., 2016.
    • 6. Shilton, K. Values levers: Building ethics into design. Science, Technology, and Human Values 38, 3 (Mar. 2013).
    • 7. Steen, M. Ethics as a participatory and iterative process. Commun. ACM 66, 5 (May 2023).
    • 8. Steen, M. Ethics for People Who Work in Tech. CRC Press, Boca Raton, FL, 2022; https://bit.ly/3JmzDjA
    • 9. Steen, M., Timan, T., and van de Poel, I. Responsible innovation, anticipation and responsiveness: Case studies of algorithms in decision support in justice and security, and an exploration of potential, unintended, undesirable, higher-order effects. AI and Ethics 1, 4 (Apr. 2021).
    • 10. Van Zyl, L. Virtue ethics and the trolley problem. In The Trolley Problem. H. Lillehammer, Ed. Cambridge University Press, Cambridge, U.K., 2023.
