
Can Chatbots Think Before They Talk?


As election seasons prove, thought need not precede eloquence, a fact as true for conversational algorithms as for politicians. Even the most successful chatbots have been largely driven by fairly simple pattern-matching rules, backed by large databases of typical discussions. While they can reliably respond to "How are you?" with "I’m fine, thanks," few have exploited recent advances in artificial intelligence (AI).

That might be changing, thanks to two Google researchers who propose applying a form of neural network to model conversations. Their data-driven technique uses a "sequence to sequence" learning method that converts full sentences into and out of vectors. Oriol Vinyals, a Research Scientist at Google, believes this will expand the usefulness of machine learning for chatbots. "Until recently, most neural network-based language modeling was just to score or aid speech recognition or machine translation," he said. His co-author and fellow Google Research Scientist Quoc V. Le added, "These recurrent language models can predict the next word in a sentence, which is interesting but limited. In a conversation, you really want to get the next sentence."
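Their models are trained end to end on large dialogue corpora; the sketch below (a toy illustration, not the authors' code) shows the shape of the idea in Python with the PyTorch library. An encoder network reads a whole sentence into a vector, and a decoder network unrolls that vector into a reply one word at a time; a GRU stands in for the recurrent networks they describe, and all sizes are arbitrary placeholders.

    # Minimal sequence-to-sequence sketch: encode a sentence to a vector,
    # then decode a reply from that vector, one word at a time.
    import torch
    import torch.nn as nn

    VOCAB, EMB, HID = 1000, 32, 64  # toy vocabulary and layer sizes

    class Seq2Seq(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, EMB)
            self.encoder = nn.GRU(EMB, HID, batch_first=True)
            self.decoder = nn.GRU(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, VOCAB)

        def forward(self, src, tgt):
            # Compress the full input sentence into a single state vector.
            _, state = self.encoder(self.embed(src))
            # Unroll a reply from that vector; during training, the decoder
            # sees the true previous word at each step ("teacher forcing").
            dec_out, _ = self.decoder(self.embed(tgt), state)
            return self.out(dec_out)  # a score for every word, at every step

    model = Seq2Seq()
    src = torch.randint(0, VOCAB, (1, 6))  # an input sentence as word IDs
    tgt = torch.randint(0, VOCAB, (1, 5))  # the reply so far, as word IDs
    print(model(src, tgt).shape)           # torch.Size([1, 5, 1000])

At inference time, the decoder's highest-scoring word is fed back in as its own next input, so the model emits an entire next sentence rather than merely scoring a single next word.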

While some researchers boost chatbots with AI, others focus on the human side of human-computer interaction (HCI), with impressive results. Bruce Wilcox’s chatbots have won four of the last six Loebner Prizes, in competitions where chatbots and judges converse in a modified Turing Test. For Wilcox, director of Natural Language Strategy at intelligent messaging platform developer Kore Inc., context is key. "Our bots are designed to have something to say," he said. "Ask another bot a question and it’ll give you an answer, but it’s not a conversation; it’s a query responder. Our bots can stay in a topic, and route to other topics automatically. They can start asking you questions and volunteering corresponding info. It’s a sharing, not an interrogation."

The Nature of Understanding

Wilcox’s chatbots are built on ChatScript, a natural-language engine that organizes data into an ontology featuring "topics," "rules," and "rejoinders." (A similar system, AIML, is an XML dialect that underlies three-time Loebner Prize winner A.L.I.C.E.) According to researcher Luka Bradeško at the Jožef Stefan Institute, ontologies such as ChatScript’s "bring more structure into the system, so you can do more complex queries — you can even call it reasoning." Yet, as he was quick to point out, even with a complex underlying data structure, "It’s still just data retrieval: it doesn’t imply learning." (Bradeško and colleague Michael Witbrock implemented an ontology-based system to create the Android-based chatbot "Curious Cat.")
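ChatScript defines its own rule language, so the following is purely illustrative: a toy Python sketch of how topics, rules, and rejoinders might fit together, with all names and replies invented for the example. It is not ChatScript's syntax or engine.

    # Toy illustration of the topic/rule/rejoinder structure. Rules live
    # inside a topic; a rejoinder fires only in response to the bot's own
    # previous line, which is what keeps an exchange on-topic.
    TOPICS = {
        "food": {
            "rules": [("like spaghetti", "I love pasta. Do you cook?")],
            "rejoinders": {
                "I love pasta. Do you cook?": [
                    ("yes", "Lucky you. What do you make?"),
                    ("no", "Takeout it is, then."),
                ],
            },
        },
    }

    def reply(user_text, topic="food", last_bot_line=None):
        t = TOPICS[topic]
        text = user_text.lower()
        # Rejoinders get first crack, keyed to what the bot last said.
        for keyword, answer in t["rejoinders"].get(last_bot_line, []):
            if keyword in text:
                return answer
        # Otherwise fall back to the topic's ordinary rules.
        for pattern, answer in t["rules"]:
            if pattern in text:
                return answer
        return "Tell me more."  # keep the conversation going

    print(reply("I like spaghetti"))
    print(reply("yes, often", last_bot_line="I love pasta. Do you cook?"))

Even a skeleton like this suggests why Wilcox's bots can "stay in a topic": responses are routed through an explicit structure of topics and follow-ups rather than isolated question-answer lookups.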

Aside from ELIZA (which debuted in the pages of Communications in January 1966), one of the oldest and best-known chatbots that also uses a data-centric approach is Cleverbot, which first appeared in 1988 and grew its conversational database through chats with creator Rollo Carpenter and his colleagues. Its release to the Web in 1997 greatly expanded its exposure to source data, and visitors have participated in hundreds of millions of interactions since. Cleverbot examines the context of whole conversations, but does not necessarily use neural networks to figure out what to say next.

Says Carpenter, "It’s a mistake to think that [chatbots that use] neural networks are fundamentally different from what’s been done before. It’s all too easy to see neural networks as the answer, when in fact they are just one possible answer. They’re just another way to process data and see patterns in data." In discussing Cleverbot’s ongoing development, however, he allows, "We’re now using neural networks ourselves."

Intelligence In, Intelligence Out

Regardless of its thought process, a chatbot’s corpus also greatly affects its responses. Vinyals and Le demonstrated their approach by using two very different sources: for an online tech-support chatbot, they used text extracted from a help desk’s online troubleshooting service; for general conversation, they used conversations found in movie subtitles (the "OpenSubtitles" dataset). It is unlikely either would do well in a test designed for the other.

Wilcox says A.M. Turing’s famous "Imitation Game" was "passed a long time ago. It’s reasonably straightforward to fool a large body of people in a mere five-minute chat. Back in the 1960s, ELIZA fooled people!"

Still, no chatbot has cleared the higher bar set by the Loebner Prize to win its top-tier $100,000 award.

Carpenter says, "My feeling is that the Turing Test will be passed very soon for most people most of the time. That doesn’t mean a chatbot will literally pass a Turing Test that’s run on a formal basis, by people trying to trick or test the machine. It means that for most people, they won’t care. They’ll treat machines as if they’re talking to them intelligently."

Tom Geller is an Oberlin, Ohio-based writer and documentary producer.
