
Communications of the ACM

ACM TechNews

Beautiful or Handsome? Neural Language Models Try Their Hand at Word Substitution


[Image: Consulting a dictionary. Credit: CC0 Public Domain]

Researchers have run a large-scale computational study of the most advanced neural language models to see how they handle lexical substitution, a crucial task in natural language processing.

Researchers at Russia's Skolkovo Institute of Science and Technology (Skoltech), Samsung Research Center Russia, HSE University, and Lomonosov Moscow State University held a contest among five neural language models.

The models competed in lexical substitution tasks, including plain substitution and word sense induction (for example, when a machine must differentiate between the bank of a river and a bank as a financial institution).
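To make the word sense induction task concrete, here is a deliberately simplified sketch. It is not the study's method: the models compared in the study use learned contextual representations, whereas this toy example assigns a sense to the ambiguous word "bank" purely by counting hand-picked context cue words. The cue sets and labels are invented for illustration.

```python
# Toy word sense induction for the ambiguous word "bank".
# Illustrative only: cue word sets are hand-made assumptions,
# not learned representations as in the actual study.

RIVER_CUES = {"river", "shore", "water", "fishing"}
MONEY_CUES = {"money", "loan", "account", "deposit"}

def induce_sense(sentence: str) -> str:
    """Assign a coarse sense label to 'bank' from surrounding words."""
    words = set(sentence.lower().replace(".", "").split())
    river_score = len(words & RIVER_CUES)
    money_score = len(words & MONEY_CUES)
    if river_score > money_score:
        return "river-bank"
    if money_score > river_score:
        return "financial-bank"
    return "unknown"

print(induce_sense("We sat on the bank of the river fishing."))
print(induce_sense("She opened an account at the bank for a loan."))
```

A real system replaces the hand-made cue sets with contextual embeddings and clusters the occurrences, but the underlying idea is the same: the surrounding words disambiguate the target.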

The researchers demonstrated which models tend to generate semantic relations of which types (synonyms, hypernyms, and more), and that additional data about the target word can boost lexical substitution quality substantially.
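The distinction between relation types can be sketched as follows. This is an assumption-laden illustration, not the researchers' evaluation code: the tiny lexicon below is hand-made, and in practice such relations would come from a resource like WordNet. For the target word "beautiful" (echoing the article's title), "handsome" is a synonym while "attractive" might serve as a broader, hypernym-like term.

```python
# Classify a proposed substitute's relation to the target word
# using a tiny hand-made lexicon (illustrative assumption only;
# a real evaluation would consult a lexical resource).

SYNONYMS = {"beautiful": {"handsome", "pretty", "lovely"}}
HYPERNYMS = {"beautiful": {"attractive"}}  # broader terms

def relation(target: str, substitute: str) -> str:
    """Label a substitute as synonym, hypernym, or other."""
    if substitute in SYNONYMS.get(target, set()):
        return "synonym"
    if substitute in HYPERNYMS.get(target, set()):
        return "hypernym"
    return "other"

print(relation("beautiful", "handsome"))
print(relation("beautiful", "attractive"))
```

Tallying which relation types a model's substitutes fall into is one way to characterize, as the researchers did, which models tend to generate which kinds of semantic relations.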

Skoltech's Alexander Panchenko said the outcomes may be helpful for language learning, enhancement of textual data for training neural networks, and writing assistance like "automatic suggestion of synonyms and text reformulation."

From Skolkovo Institute of Science and Technology (Russia)


Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA

