Communications of the ACM

ACM TechNews

Facebook Dataset Addresses Algorithmic Bias


A Facebook dataset, designed to help researchers improve fairness in their artificial intelligence models, contains some 45,000 videos of participants sharing their age and gender.

Credit: Facebook

Facebook has made public its Casual Conversations dataset, comprising more than 45,000 videos of 3,000 individuals of different skin tones sharing their age and gender, to help artificial intelligence (AI) researchers address potential algorithmic bias in their computer vision and audio models.

Facebook AI's Cristian Canton Ferrer said the dataset aims to address "the critical need within the AI community to [improve] the fairness of AI systems" and "the lack of high-quality datasets that are designed to help measure this fairness in AI."

The social media giant considers the dataset relatively unbiased because participants provided their own ages and genders for content labeling, rather than relying on third-party or computer-system estimates.

Annotators trained in the Fitzpatrick scale, a skin classification system, developed labels for participants' skin tones and marked videos recorded in low ambient lighting, allowing researchers to gauge how AI systems handle different skin tones in low-light conditions.

From The Wall Street Journal
View Full Article - May Require Paid Subscription

Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA
