Sweet talking your computer
by admin on 08/31/10 at 1:36 pm
Originally published by Clayman Affiliate Clifford Nass in the Wall Street Journal, August 28, 2010.
When BMW introduced one of the most sophisticated navigation and telematics systems into its 5 Series car in Germany a decade ago, it represented the pinnacle of German engineering excellence, with great advances in accuracy and functionality. Yet BMW was forced to recall the product—because the system had a female voice. The service desk had received numerous calls from agitated German men who had the same basic complaint. They couldn’t trust a woman to give them directions.
While this might seem like a story of men’s weird attachment to cars or gender stereotyping run amok, a growing body of research suggests that there is something much deeper at work: People respond to computers and other technologies using the same social rules and expectations that they use when interacting with other people. These responses are not spur-of-the-moment reactions. They run broadly and deeply.
If you were asked how much you liked, say, a plate of lasagna, you would undoubtedly say nicer things to the chef than you would to a person who had no connection to the chef. This would be the polite thing to do. Would you also be overly nice to a computer that tutored you for 30 minutes and then asked how well it taught you?
To find out, I ran an experiment at Stanford University. After being tutored by a computer, half of the participants were asked about the computer’s performance by the computer itself and the other half were asked by an identical computer across the room. Remarkably, the participants gave significantly more positive responses to the computer that asked about itself than they did to the computer across the room. These weren’t overly sensitive people: They were graduate students in computer science and electrical engineering, all of whom insisted that they would never be polite to a computer.
Another social rule is illustrated by the ubiquitous color wars at summer camps, in which half of the camp is arbitrarily assigned to the Red Team and the other half is assigned to the Blue Team. Even though the assignments are random, the Red Team campers suddenly notice that the members of their team are faster, bigger, more skilled, more attractive and have more of every other positive trait compared with the members of the Blue Team; seemingly violating the laws of physics, the Blue Team discovers the same positive attributes about their teammates.
In a set of experiments, my lab tested whether computers could leverage this bonding. Half of the participants were given a blue wristband, worked at a computer with a blue border around its monitor, and were told that they and the computer were “the blue team.” The other half were also given a blue wristband, but they worked with a green-bordered monitor and were told that they were the “blue person working with the green computer.” Although every other aspect of the 40-minute interaction was the same, the “team” participants thought that the computer was smarter and more helpful, and they worked harder, because of the special “bonds” between the two teammates.
More than 100 experiments have shown that one can take virtually any finding from the social sciences and apply it to people’s interactions with computers. This isn’t just a scientific oddity; it has been used to improve the design of a number of products. One of the most reviled software designs of all time was Clippy, the animated paper clip in Microsoft Office. The mere mention of his name to computer users brought on levels of hatred usually reserved for jilted lovers and mortal enemies. There were “I hate Clippy” websites, videos and T-shirts in numerous languages. One of the first viral videos on the Internet—well before YouTube made posting videos common—depicted a person mangling a live version of Clippy, screaming, “I hate you, you lousy paper clip!”
Clippy’s problem was that he was utterly oblivious to the appropriate ways to treat people. Every time a user typed “Dear…” Clippy would dutifully propose, “I see you are writing a letter. Would you like some help?”—no matter how many times the user had rejected this offer in the past. Clippy never learned anyone’s name or preferences. If you think of Clippy as a person, of course he would evoke hatred and scorn.
To have Clippy learn about his users would have required advanced artificial intelligence technology, along with a great deal of design and development time. An alternate approach is to use a social strategy. The simplest and most effective way for dislikable people to become more accepted is for them to find a scapegoat.
In an experiment, we revised Clippy so that when he made a suggestion or answered a question, he would ask, “Was that helpful?” and then present buttons for “yes” and “no.” If the user clicked “no,” Clippy would say, “That gets me really angry! Let’s tell Microsoft how bad their help system is.” He would then pop up an email to be sent to “Manager, Microsoft Support,” with the subject, “Your help system needs work!” After giving the user a couple of minutes to type a complaint, Clippy would say, “C’mon! You can be tougher than that. Let ‘em have it!”
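To picture the mechanics of that scapegoat strategy, here is a minimal sketch of the dialogue flow as described above. The function names and console prompts are hypothetical stand-ins; the actual study used yes/no buttons and a pop-up email window inside Office rather than a terminal.

```python
# Minimal sketch of the revised Clippy's "scapegoat" flow described above.
# The function names and console-based I/O are hypothetical stand-ins for
# the real Office assistant UI, which is not public.

def ask_was_that_helpful() -> bool:
    """Ask whether the last suggestion helped (yes/no buttons in the original)."""
    answer = input("Clippy: Was that helpful? [yes/no] ").strip().lower()
    return answer.startswith("y")

def draft_complaint_email(body: str) -> dict:
    """Assemble the complaint email Clippy offered to send on the user's behalf."""
    return {
        "to": "Manager, Microsoft Support",
        "subject": "Your help system needs work!",
        "body": body,
    }

def scapegoat_flow() -> None:
    """If the user says the help was not helpful, redirect the blame to a common enemy."""
    if ask_was_that_helpful():
        print("Clippy: Great, glad I could help!")
        return

    print("Clippy: That gets me really angry! "
          "Let's tell Microsoft how bad their help system is.")
    complaint = input("Type your complaint: ")

    # In the study, Clippy gave the user a couple of minutes and then egged them on.
    print("Clippy: C'mon! You can be tougher than that. Let 'em have it!")
    complaint += " " + input("Add more if you like: ")

    email = draft_complaint_email(complaint)
    print(f"Drafted email to {email['to']!r} with subject {email['subject']!r}.")

if __name__ == "__main__":
    scapegoat_flow()
```

The point of the sketch is that nothing about the underlying help content changes; the only addition is a social move that redirects the user’s frustration toward a third party.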
The system was shown to 25 computer users, and the results were unanimous: People fell in love with the new Clippy. A long-term business user of Microsoft Office exclaimed, “Clippy is awesome!” An avowed “Clippy hater” said, “He’s so supportive!”
Without any fundamental change in the software, the right social strategy rescued Clippy from the list of Most Hated Software of all time; creating a scapegoat bonded Clippy and the user against a common enemy. Of course, that enemy was Microsoft, which didn’t pursue this strategy. When Microsoft retired Clippy in 2007, it invited people to shoot staples at him before his final burial.
Other research shows that the Wizard of Oz was right when he made the Scarecrow intelligent simply by labeling him a “Doctor of Thinkology.” To test the power of labels, we set out to determine if even a television could benefit from being a “specialist.” Participants were brought into the laboratory and watched segments from news shows and situation comedies. Half of the participants were told that they would watch an ordinary TV that showed both news and entertainment programs. The other half were told that they would watch programs on two identical ordinary televisions: one that happened to only show news, and the other that happened to only show entertainment.
After watching the shows, we asked participants to evaluate what they had seen. Participants who watched the segments on the “specialist” TV thought the news segments were significantly higher in quality, more informative, interesting and serious than did participants in the “generalist” condition. Similarly, the entertainment segments were significantly funnier and more relaxing when watched on the “specialist” television. Thus, even meaningless assignments of “expertise” can result in powerful effects.
The ineffective or effective use of social rules can also affect safety. In 2002, a Japanese car company developed a system that would monitor drivers’ performance and warn them when improvements were appropriate. During one demonstration, the participant exceeded the speed limit and made a turn a little too sharply. “You are not driving very well,” the car said. “Please be more careful.”
The driver was not delighted to hear this valuable information from an impartial source; instead, he became somewhat annoyed. He started to over-steer, making rapid, small adjustments to the wheel; the system reported an increase in driving speed and a decrease in the distance to the car ahead. “You are driving quite poorly now,” the car announced. “It is important that you drive better.”
Was the driver now appropriately chastened? No. His face contorted in anger as he started driving even faster, darting from lane to lane without signaling. He swerved back and forth from one side of the lane to the other at a frightening pace, tailgating the cars in front of him. This spiral of negative evaluation, anger, worse driving and more negative evaluation escalated until, in a rage, he smashed into another car in the simulation.
Trying to cheer up unhappy drivers by giving a car an enthusiastic voice, on the theory that “misery loves company,” doesn’t work either. It turns out that the correct saying is “misery loves miserable company.” When you’re angry, there are few things worse than having someone bounce in and say, “Let’s turn that frown upside down!” A much better strategy is to sound negative and subdued, thereby being sympathetic while reducing the driver’s arousal.
This idea was tested in a car simulator study, in which happy or upset drivers used a car with a voice that was either clearly upbeat or morose and downbeat. Not surprisingly, happy drivers had fewer accidents and paid more attention to the road with the happy voice. However, the happy voice made upset participants’ driving much worse: Sad drivers hearing the happy voice had approximately twice as many accidents on average as with the sad voice. Upset drivers also enjoyed driving more, liked the voice more, and thought that the car was of a higher quality when the virtual passenger was sad. Upset drivers even spoke much more with the sad passenger than they did with the happy one. Of course, happy drivers enjoyed driving more, liked the voice more, thought the car was better and spoke more with the happy voice than the sad voice.
While computers are usually thought of as the antithesis of sociality and caring, understanding the value of conformity to social rules by technology has made computers and other machines more likeable, effective and persuasive. Indeed, we may be reaching the point at which our technologies are actually more socially effective than our colleagues.
We now see software that is superior to all but the most suave people with respect to effective praising and criticizing (if you need to give a mix of positive and negative feedback, it turns out it’s better to criticize first, then praise); maintaining consistency when managing people with different personality types; reinforcing team bonding (by emphasizing similarity and shared goals); handling frustration and other negative emotions (don’t allow people merely to vent—empathize); and giving more compelling recommendations by leveraging expertise. It would be ironic if, in the future, people turned to computers to learn how to win friends and influence people, rather than the other way around.
Tags: clifford nass, communication, Faculty Affiliate