At a typical annual meeting of the Association for Computational Linguistics (ACL), the program is a parade of titles like "A Structured Variational Autoencoder for Contextual Morphological Inflection." The same technical flavor permeates the papers, the research talks, and many hallway chats.
At this year's conference in July, though, something felt different—and it wasn't just the virtual format. Attendees' conversations were unusually introspective about the core methods and objectives of natural-language processing (NLP), the branch of AI focused on creating systems that analyze or generate human language. Papers in this year's new "Theme" track asked questions like: Are current methods really enough to achieve the field's ultimate goals? What even are those goals?
My colleagues and I at Elemental Cognition, an AI research firm based in Connecticut and New York, see the angst as justified. In fact, we believe that the field needs a transformation, not just in system design, but in a less glamorous area: evaluation.
From MIT Technology Review