That next tweet you receive on Twitter may not come from a live person at all but from a social bot, a tiny program designed to mimic real users.
Indeed, a 2009 study found that 24% of all tweets are created by bots, not humans.
Some are known to be malevolent; in November 2011, social bots made off with 250GB of personal information belonging to thousands of Facebook users.
However, last January, three independent researchers started a project to determine what positive effects bots could have in social media such as Twitter. What they discovered was that the bots, swarms of automated, intelligent identities that interact with, encourage, and provoke communities toward certain behaviors, could serve as "virtual social connectors," accelerating the natural rate of human-to-human communication.
Most surprising to the researchers was that inserting the bots produced a 43% increase in human-to-human "follows."
"That was totally shocking to us," says Tim Hwang, one of the researchers and CEO of their San Francisco-based Pacific Social Architecting Corp. "You’d think only gullible people would be fooled by a bot. But we found bots were able to change behavior in human-to-human interaction–and that enables a sort of social shaping to grow communities in a certain way."
For example, Hwang explains, a bot might send out a tweet that says, "Hey, Bob! Check out my friend Bill. I think you’ll find him interesting."
Using such tactics, Hwang says it’s possible to take a group of people who, say, like classical music and then use bots to get them to appreciate jazz by "stitching together" their social network with one composed of people who like jazz.
"Bots are the social equivalent of someone who is the life of the party and is really good at introducing one person or group to another," Hwang says. "What we are doing here is introducing that force into social media and, as a result, we are improving the connection rate between people."
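The introduction tactic Hwang describes can be sketched in a few lines. This is a minimal illustration only, not the researchers' actual system; the function names, message template, and handles are hypothetical.

```python
# Hypothetical sketch of a "social connector" bot composing
# introduction tweets between pairs of users.

def compose_introduction(target: str, suggestion: str) -> str:
    """Build a tweet nudging `target` to check out `suggestion`."""
    return (f"Hey, @{target}! Check out my friend @{suggestion}. "
            f"I think you'll find them interesting.")

def introductions(pairs):
    """One introduction tweet per (target, suggestion) pair."""
    return [compose_introduction(target, suggestion)
            for target, suggestion in pairs]

# Example: stitching one user's network to another's.
tweets = introductions([("bob", "bill")])
print(tweets[0])
```

Sending such messages at scale, across many pairs drawn from two different communities, is what "stitches together" the two social networks.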
Their most recent field test report explains how Hwang and his two fellow researchers, CTO Max Nanis and chief scientist Ian Pearce, followed the activity of 2,700 Twitter users over a 54-day period. They used no bots during a 33-day control period, then deployed one bot into each of nine experimental groups of 300 users for the remaining 21 days.
"The bots were programmed to operate strategically in ways intended to foster connections between users in their respective target groups," says Nanis.
During the control period, groups saw an average connection rate of 626 new follows per day; during the experimental period, the average rate jumped 43% to 901 new follows per day.
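The reported jump follows directly from the two averages. A quick check of the arithmetic (variable names are mine; the article's 43% figure presumably reflects the researchers' own rounding or per-group averaging, since the raw figures give closer to 44%):

```python
# Connection-rate figures from the field test report.
control_rate = 626     # avg new follows/day, 33-day control period
experiment_rate = 901  # avg new follows/day, 21-day bot period

# Relative increase between the two periods.
increase_pct = (experiment_rate - control_rate) / control_rate * 100
print(f"{increase_pct:.1f}% increase")
```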
With results like these, opportunities abound for bot usage in the real world, says Hwang.
"I can imagine a 'stop smoking' campaign, for instance, that would use bots instead of public service announcements," he says. "Bots could stitch together two groups–one that is really successful at quitting smoking and one that isn’t–and create communications between the two and communities of support for the smokers."
Marketers may also find bots useful for promoting products, and political activists for influencing voters.
"The cost of bot production is so cheap, you’re going to see more of them in the future as opposed to fewer," Hwang predicts.
Their next project is to determine how easy it is to scale up bot usage and "launch swarms of bots to do various large-scale social shaping," he says. "For instance, we might insert 200 bots to introduce two groups of 10,000 people each."
Paul Hyman is a science and technology writer based in Great Neck, NY.
My research group has done some analysis of the social bot challenge organized by Tim, and published selected results at a workshop at this year's World Wide Web conference (2012) in Lyon, France:
C. Wagner, S. Mitter, C. Körner, and M. Strohmaier. When social bots attack: Modeling susceptibility of users in online social networks. In Proceedings of the 2nd Workshop on Making Sense of Microposts (MSM'2012), held in conjunction with the 21st World Wide Web Conference (WWW'2012), Lyon, France, 2012.
available at: http://kmi.tugraz.at/staff/markus/documents/2012_MSM12_socialbots.pdf
I think one of the interesting findings of our paper is that the people who are most active on social networks (and who, in theory, should have the highest social media competencies) seem to be most vulnerable to these kinds of attacks.
It is also interesting to see that it took bots at least 7 days of activity, and lots of tweets, to elicit a reaction from targets (discounting auto-follow behavior).
Tim and his colleagues Ian Pearce and Max Nanis were the guest editors for the featured article in the most recent issue of ACM Interactions, "Socialbots: Voices from the fronts," which includes some additional context for this work as well as contributions by other folks pushing the boundaries of social robotics research and development.
Here's a link to the aforementioned paper ...