An image-based benchmark could help overcome the cultural bias that stems from machine learning (ML) training datasets being written predominantly in English.
An international group of researchers led by Denmark's University of Copenhagen (KU) developed the Image-Grounded Language Understanding Evaluation (IGLUE) benchmark, which can evaluate an ML model's performance across 20 languages.
Image labels in ML are typically in English, while IGLUE covers 11 language families, nine scripts, and three geographical macro-areas. IGLUE's images feature culture-specific components supplied by volunteers in geographically diverse countries in their native languages.
KU's Emanuele Bugliarello said the researchers hope IGLUE's underlying methodology could improve applications "which help visually impaired in following the plot of a movie or another type of visual communication."
From University of Copenhagen (Denmark)
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA