Communications of the ACM

ACM TechNews

Pay Users to Spot Bias in AI, Say Top Researchers

Fears have grown that algorithmic bias could result in discriminatory outcomes for certain groups.

Leading researchers have proposed a novel initiative to make artificial intelligence more trustworthy: offer users monetary rewards in exchange for spotting bias in algorithms.

Credit: Getty Images

Leading researchers at institutions such as OpenAI, Google, and the U.K.'s Alan Turing Institute and University of Cambridge have proposed a system of financially rewarding users to spot bias in algorithms.

The concept was inspired by the bug bounty programs created to encourage software developers to look for flaws. Potential users could include artificial intelligence (AI) researchers with direct access to algorithms, the public, and journalists who encounter apparent bias in everyday systems.

Cambridge's Haydn Belfield said incentivizing users to rigorously check AI systems could help identify problems earlier in development, while OpenAI's Miles Brundage suggested monetary rewards would encourage developers to spot issues not covered in public documentation.

The Alan Turing Institute's Adrian Weller said financial compensation could encourage greater transparency on algorithmic bias, but cautioned that full transparency could reveal how to exploit such systems.

From Financial Times
View Full Article - May Require Paid Subscription


Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA
