Communications of the ACM

ACM Opinion

The Field of Natural Language Processing is Chasing the Wrong Goal


Researchers are too focused on whether AI systems can ace tests of dubious value; they should instead be testing whether those systems grasp how the world works.

Credit: Ms Tech | Unsplash

At a typical annual meeting of the Association for Computational Linguistics (ACL), the program is a parade of titles like "A Structured Variational Autoencoder for Contextual Morphological Inflection." The same technical flavor permeates the papers, the research talks, and many hallway chats.

At this year's conference in July, though, something felt different—and it wasn't just the virtual format. Attendees' conversations were unusually introspective about the core methods and objectives of natural-language processing (NLP), the branch of AI focused on creating systems that analyze or generate human language. Papers in this year's new "Theme" track asked questions like: Are current methods really enough to achieve the field's ultimate goals? What even are those goals?

My colleagues and I at Elemental Cognition, an AI research firm based in Connecticut and New York, see the angst as justified. In fact, we believe that the field needs a transformation, not just in system design, but in a less glamorous area: evaluation.

From MIT Technology Review

