
Communications of the ACM

BLOG@CACM

Trolleyspotting


Robin K. Hill, University of Wyoming

Near-obsession with the Trolley Problem in our field allows this author to assume that the reader is familiar with it, and perhaps with a recent article on that subject in this publication [AwadEtAl]. In my last post [Hill2020], I deprecated the Trolley Problem, saying that if it must be solved, then we face much bigger problems. I should explain.

In short: The Trolley Problem is not the kind of problem that is open to a solution. The Trolley Problem is not for solving, it's for teaching—for stimulating, for illustrating, for provoking, for exposing predilections and contradictions. It's a thought experiment. (Philosophy also performs thought experiments with zombies.) The point is not to work out the answer to a riddle; the point is to think about the implications of the circumstances. We open Pandora's Box, but we don't intend to catch the demons and stuff them back in; we let them fly around wreaking havoc because we intend to examine the damage.

That sounds alarming. So do the alternatives presented in the Trolley Problem. But we don't worry about those alternatives. Even when a similar situation arises in real life (rarely!), there are other factors at play, circumstances are subject to shift at the last instant, moral reasoning is impractical, and moral rigor is likely abandoned. Here's a thought experiment: What would a "solution" in the human realm look like?—Some rule or set of rules for actions based on conspicuous circumstances, accommodating infinite detail, and reflecting a world-wide consensus or some other universal approval. It's a good thing we're not waiting.

Even if the framework were feasible, the content is debatable. People don't agree. As Lin says, "...to systematically decide in a certain way—for instance, to always protect the driver über alles—could be faulted" [Lin], which strikes me as a gracious understatement. The article by Awad et al. defends the efficacy of Trolley Problem studies in the development of autonomous vehicles, but the authors admit that "Using realistic crash scenarios would make it difficult to tease out the effect of multiple contributing factors and make it difficult to draw general conclusions beyond the highly specific set of circumstances that they feature" [AwadEtAl]. Their research performs experimental philosophy, surveying respondents about what actions the self-driving car should take, and in the process gathering statistics that reveal cultural and national differences. The insights are interesting, and offer, as the authors say, avenues for public engagement. But their account is not prescriptive ethics, it's descriptive ethics, amalgamating the opinions of (a large sample of) the current population. Certainly we want to believe that averaging thousands of judgments on the right thing to do actually identifies the right thing to do, but history teaches differently. The authors' call for more dialogue with the humanities is welcome. But this author believes that dialogue with the humanities will reveal the Moral Machine to be an oxymoron.

Hence, another thought experiment: Let's imagine that there actually is a solution suitable for autonomous vehicles. What would this "solution," in the algorithmic realm, look like?—Some assessment of situational factors and calls to libraries of general knowledge, then a calculation of worthiness according to quantified criteria extracted from alternative disasters, all computed in picoseconds, implementing actions programmed in a decision tree. Such an implementation is a mechanical proxy for ethics. Current versions of the Trolley Problem stem from the work of Philippa Foot and Judith Jarvis Thomson, some years ago [WikiTrolley]. Thomson opines that killing is worse than letting die, but warns that this thesis "cannot be used in any simple, mechanical way in order to yield conclusions" about moral problems, that the cases have to be considered individually [Thomson]. Considered individually means that not all the variables can be known, nor all the contingencies bound to appropriate actions, in advance. That is the basis of my claim that if the Trolley Problem must be solved, then we face bigger problems.
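To make the thought experiment concrete, here is a deliberately crude sketch of what such a mechanical proxy might look like: quantified situational factors fed through a fixed scoring function, with the "decision tree" collapsed to picking the highest-scoring branch. Every name, factor, and weight below is hypothetical, invented purely for illustration; the crudeness is the point, since no fixed table of weights can accommodate the infinite detail that considering cases individually requires.

```python
# A hypothetical "mechanical proxy for ethics": score each available
# action by quantified situational factors, then pick the maximum.
# All factors and weights here are invented for illustration only.

def worthiness(outcome):
    """Assign a numeric 'worthiness' to one projected outcome."""
    # The weights encode a fixed, debatable preference ordering,
    # e.g. that killing weighs more than letting die.
    return (-10 * outcome["expected_casualties"]
            - 5 * outcome["kills_rather_than_lets_die"]
            - 1 * outcome["property_damage"])

def choose_action(outcomes):
    """Select the branch whose projected outcome scores highest."""
    return max(outcomes, key=lambda name: worthiness(outcomes[name]))

# Two branches of a trolley-style dilemma, reduced to flat numbers --
# precisely the simple, mechanical use Thomson warns against.
scenario = {
    "stay_on_track": {"expected_casualties": 5,
                      "kills_rather_than_lets_die": 0,
                      "property_damage": 0},
    "divert":        {"expected_casualties": 1,
                      "kills_rather_than_lets_die": 1,
                      "property_damage": 1},
}
print(choose_action(scenario))
```

Such a proxy always produces an answer in picoseconds; what it cannot produce is a reason to trust the weights.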

Far be it from me to discourage any sort of speculative treatment of the Trolley Problem and its fellows, which has yielded significant and conscientious research [Wolkenstein and its citations and many others]. I agree with Rodney Brooks that the setup is artificial to the point of absurdity [Brooks] and with Patrick Lin that "We can’t reject ethics because thought experiments are so fake" [Lin]. My point is a narrow one. Any sincere consideration of the philosophical issues will enhance our understanding, but fall short of a solution. Computer science needs such exercises because our philosophical debility leaves us vulnerable to the law of the instrument [WikiLawInstrum], where AI is the hammer and every social question, a nail. Insofar as we look askance at laypersons' clumsiness with file formats, blind faith in election systems, and sorry excuses for passwords, then we must also look askance at our own misconception of the Trolley Problem as just a milestone along the road, so to speak, something that can be and will be "solved" by the momentum of automated vehicle development.

I have no advice to offer an autonomous vehicle on whether to hit the parent and baby or sacrifice the passenger. If such a decision is actually necessary for a marketable vehicle, and if we are determined to bring it to market, then some software design group will decide that question, in conjunction with legal and actuarial expertise. But it won't be elegant, it won't be inspired, it won't be a triumph of ethics. And it won't be a solution to the Trolley Problem.

References

[AwadEtAl] Edmond Awad, Sohan Dsouza, Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. 2020. Crowdsourcing moral machines. Commun. ACM 63, 3 (March 2020), 48–55.

[Brooks] Rodney Brooks. 2017. Unexpected Consequences of Self Driving Cars. Blog post from January 12, 2017.

[Hill2020] Robin K. Hill. 2020. Computing Ethics and Teaching It. BLOG@CACM. Blog posted July 6, 2020.

[Lin] Patrick Lin. 2017. Robot Cars And Fake Ethical Dilemmas. Forbes Magazine. Apr 3, 2017.

[Thomson] Judith Jarvis Thomson. 1976. Killing, Letting Die, and the Trolley Problem. Monist, 59, 204-217.

[WikiLawInstrum] Wikipedia contributors. (2020, September 2). Law of the instrument. In Wikipedia, The Free Encyclopedia. Retrieved September 10, 2020.

[WikiTrolley] Wikipedia contributors. (2020, September 1). Trolley problem. In Wikipedia, The Free Encyclopedia. Retrieved September 12, 2020.

[Wolkenstein] Andreas Wolkenstein. 2018. What has the Trolley Dilemma ever done for us? On some recent debates about the ethics of self-driving cars. Ethics and Information Technology 20:3, 163-173.

 

Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.


 
