
Communications of the ACM

ACM TechNews

It's Your Fault Microsoft's Teen AI Turned Into Such a Jerk


A confused vision.

An online chatbot that Microsoft unveiled had to be pulled down the same day because Twitter users had trained it to use offensive language.

Credit: Microsoft Corp.

Microsoft unveiled a new online chatbot on Twitter Wednesday morning but took it offline by the evening because Twitter users coaxed it into regurgitating offensive language.

Called Tay, the chatbot is designed to respond to messages in an "entertaining" way, mimicking the persona of an 18- to 24-year-old in the U.S.

There were no humans making the final decision on what Tay would publicly say. Tay was likely built with neural networks trained on vast troves of online data to talk like a teenager. The system evaluates the weighted relationships between two sets of text--often questions and answers--and decides what to say by picking the strongest relationship.
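The "pick the strongest relationship" idea can be illustrated with a minimal retrieval-style sketch: score stored prompt/response pairs against an incoming message and reply with the best-matching pair's response. All names and data below are hypothetical; this is not Tay's actual implementation, which Microsoft has not published.

```python
# Hypothetical sketch of response selection by strongest match.
# Not Tay's real system; a toy bag-of-words retrieval chatbot.
from collections import Counter
import math

def bag(text):
    """Turn a string into a bag-of-words Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Toy corpus of previously seen prompt -> response pairs.
# In a real system this would come from large-scale training data.
corpus = [
    ("hello how are you", "doing great, thanks for asking!"),
    ("what is your favorite movie", "anything with robots in it"),
    ("tell me a joke", "why did the bot cross the road?"),
]

def respond(message):
    """Reply with the response whose prompt best matches the message."""
    scores = [(cosine(bag(message), bag(p)), r) for p, r in corpus]
    return max(scores)[1]

print(respond("hey, how are you doing"))
```

The sketch also shows why unmoderated learning goes wrong: whatever pairs land in the corpus, the bot will faithfully echo back.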

Microsoft's Xiaoice chatbot has a large cult following in China, with millions of young people interacting with it on their smartphones every day, and that success probably gave Microsoft the confidence it could replicate the project in the U.S.

Dennis R. Mortensen, CEO and founder of x.ai, says humans should not necessarily blame the artificial-intelligence technology, because it is just a reflection of who we are.

Ryan Calo, a law professor at the University of Washington who studies AI policy, says developers could in the future employ a labeling mechanism to make it more transparent where a bot like Tay is pulling its responses from.

From Wired

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
