When my son Matthew was two years old, I told him, "It's not polite to eat so fast." He asked, "How come you don't yell at Honey (our dog)? She eats much faster than me." "Dogs don't have to be polite," I answered. Matthew replied, "Then why do you tell Honey it's bad when she barks at someone?"
Later that day, Matthew and I were walking along the street when he pointed and said, "Look at that ugly dog." I told him, "You shouldn't say things like that." He said, "Do I hurt a dog's feelings when I say bad things about it?" Once again, a teachable moment presented itself; like many parents, I resorted to, "Let's get some ice cream and eat it slowly."
The difficulty of explaining why people are polite to dogs opened up an opportunity for a fundamental insight into human-computer interaction: If people are polite to dogs, even when such behavior is inexplicable, might people also be polite to computers?
One of the most fundamental rules of politeness is that when a person states an opinion and then asks a question about it, the respondent is supposed to agree. Thus, to the question, "I think I'm a great teacher; what do you think?" the polite response is, "You're great." If the question is asked by a neutral third party, the respondent can give an honest answer rather than a polite one. This leads to a rather odd-sounding conclusion: If people apply the same politeness rules to computers that they do to other people, a computer that asks about itself should receive more positive evaluations than a different computer or a paper-and-pencil inquiry about the original computer's performance.
Inspired by this question, we developed an experimental paradigm in which participants performed three tasks with a voice-based computer (a black-and-white NeXT computer displaying no text on the screen other than radio buttons at the bottom for the user's responses): They were tutored; they took a test; and they received (from the computer) an evaluation of their own performance and a (positive) evaluation of the computer's own performance.1 We selected expert computer users to ensure the participants didn't believe it was reasonable to be polite to a computer.
After the evaluation session, individuals answered 21 questions, each asking how well an adjective described the computer's performance. One-third of the participants were presented with the textual questionnaire on the computer they had just worked with; one-third answered the identical questions on a computer on the other side of the room; and the final one-third answered the questionnaire with paper and pencil. Each adjective was rated on a 10-point scale ranging from "Describes Very Poorly" to "Describes Very Well."
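To make the between-subjects design concrete, here is a minimal sketch, with invented placeholder ratings (not the study's actual data), of how mean evaluations across the three questionnaire conditions could be compared with a one-way ANOVA; the group sizes and rating distributions are assumptions for illustration only:

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical 1-10 ratings for the three between-subjects conditions;
# the real study collected 21 adjective ratings per participant.
conditions = {
    "same_computer": [random.gauss(7.5, 1.0) for _ in range(22)],
    "other_computer": [random.gauss(6.0, 1.0) for _ in range(22)],
    "paper_pencil": [random.gauss(6.0, 1.0) for _ in range(22)],
}

def one_way_anova_F(groups):
    """Classic one-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)            # number of conditions
    n = len(all_vals)          # total participants
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = one_way_anova_F(list(conditions.values()))
print(f"F(2, {sum(len(g) for g in conditions.values()) - 3}) = {F:.2f}")
```

A large F here would indicate that mean evaluations differ across conditions, which is the shape of the comparison the experiment relies on.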
The results of the experiment are presented in the figure here. Stunningly, participants behaved toward the computer the same way they behave toward other people: They gave significantly more positive responses on the computer they had worked with than when they were asked the identical questions in the identical order either by a computer on the other side of the room or via paper and pencil. In short, they were polite to the original computer. A follow-up experiment demonstrated the same pattern held when people used a less human-like, text-based computer.
These studies initiated the idea that people are polite to computers, and they have become the canonical examples (for example, [3, 9, 11]) of the Computers Are Social Actors paradigm [7, 9]: People treat computers, and expect computers to behave, according to the same rules and heuristics they use when interacting with other people.
These responses differ in two ways from well-known seemingly social responses such as pleading with a car to start or screaming at a quarterback to throw the ball. First, in the latter behaviors, individuals are well aware they are behaving socially, while in our experiments, people adamantly denied responding socially. Second, the responses in our experiments were not spontaneous and short-lived; individuals had as much time as they wished to make their attributions.
Since these studies, our lab and others around the world have demonstrated that people extend a wide range of rules of etiquette to computers. People will reciprocate to a computer: In the U.S., an individualistic culture, users will help a computer that has helped them more than they will help a different computer that asks for the same favor. Conversely, in a collectivist culture like Japan, people will politely reciprocate to the second computer if it is the same brand as the first, but not if it is a different brand [4]. People also prefer computers that are polite: A telephone-based system that blames the user for misrecognitions is viewed more negatively, and is less effective, than one that politely blames external circumstances, such as noise on the line. Flattery is also very effective: People respond to it as positively as to praise known to be sincere. In all of these experiments, people insisted they were not applying the rules of etiquette, even though it was clear they were.
Why don't people hold dogs and computers to the same standards of polite behavior as people? The obvious answers seemed insufficient. "Dogs/computers aren't expected to [or smart enough to] be polite" can be challenged by various situations in which polite behavior is demanded and learned.
Why is it inappropriate to make negative comments to an animal or a computer that doesn't understand what is being said? "The dog/computer might feel bad" attributes a level of intellectual sophistication that is clearly unwarranted. "It is good practice for when you talk with people" or "it's a habit" leads to the question of why we don't feel obligated to thank a rock for being particularly strong. "The owner/programmer might hear" is belied by the fact that people tend not to criticize dogs (except for strays) even when the owner isn't present. In essence, both adults and children expect that dogs and computers will be polite and should be treated politely.
Unfortunately, direct attempts to understand politeness toward dogs and computers are doomed to failure. "Do you expect dogs/computers to be polite?" or "Do you try to be polite to dogs/computers?" are questions that suggest the person should anthropomorphize before answering, and even if we could overcome that obstacle (which seems highly unlikely), it would be extremely difficult to explain what politeness would mean in a non-human context. Furthermore, the link between what people say they do and what they actually do is quite weak.
What makes people adopt an inclusive approach to polite responses and polite expectations, even when they know that those responses are illogical? In his introduction, Chris Miller suggests the underlying process is "agentification." Why is agentification so broadly assigned and why does that assignment lead to politeness? One possible explanation comes from evolutionary psychology.
Humans evolved in a world in which their most significant opportunities and problems, from food to shelter to physical harm, all revolved around other people. In this environment, there would be a significant evolutionary advantage to the rule: If there's even a low probability that it's human-like, assume it's human-like. This explains why the right ear (which corresponds to the left side of the brain) shows a clear advantage in processing not only native language but also nonsense syllables, speech in foreign languages, and even speech played backward; the left ear attends to all other sounds. If a sound is close to speech, humans assume it's speech (that is, a person speaking). It also explains why people see human faces in cars (for example, a Volkswagen Beetle starred in a number of movies), sock puppets, and even a simple circle with two dots in the upper half and a curved line in the bottom half (the "Smiley").
What, then, are the cues that encourage people to treat a computer (or anything else) as a social actor that both warrants and is expected to exhibit politeness? We hypothesize that each of the following cues triggers etiquette responses, and that entities exhibiting more of them likely elicit broader and more robust polite responses and polite expectations:
While humans encompass all of these elements, computers, pictorial agents, interactive voice response systems (which elicit a number of "please" and "thank you" responses), robot (and live) dogs, and many other technologies exhibit some of these characteristics and seem to elicit some social responses.
Pessimists examine the expansiveness of etiquette and frown at another faulty heuristic that demonstrates the limited cognitive processing skills of people. I take a different view: Polite responses to computers represent the best impulse of people, the impulse to err on the side of kindness and humanity.
4. Katagiri, Y., Nass, C., and Takeuchi, Y. Cross-cultural studies of the Computers Are Social Actors paradigm: The case of reciprocity. In Usability Evaluation and Interface Design: Cognitive Engineering, Intelligent Agents, and Virtual Reality, M.J. Smith, G. Salvendy, D. Harris, and R. Koubek, Eds. Lawrence Erlbaum, Mahwah, NJ, 2001, 1558–1562.
1For example, "You answered question 3 correctly. This computer (criterion 1) provided very useful information," or "You answered question 5 incorrectly. However, this computer provided useful facts for answering this question." The "self-praise" comments were employed to give participants the sense that the computer perceived its own performance positively, thereby increasing the feeling that a person, if being polite, should respond positively in return. The computer did not say "I" or attribute emotion to itself, in order to eliminate the possibility that participants believed we intended them to anthropomorphize.
©2004 ACM 0002-0782/04/0400 $5.00