First, There Were Bug Bounties. Now, Here Come the Bias Bounties

In 2021, Twitter was the first company to launch a bounty on algorithmic bias. Other companies have followed suit.

As more companies start deploying artificial intelligence (AI) and machine learning (ML) to automate and streamline processes, they need to address an increasingly important issue: bias in AI algorithms.

Bias can unwittingly find its way into algorithms when incomplete or bad data is used, or when a development team lacks diversity of viewpoints, thought processes, and life experiences. Such bias can have reputational, regulatory, and revenue consequences for a company.

To check whether an algorithm harbors flaws or inherent bias, organizations can run an AI bias bounty, which industry observers say is a cost-effective way to get many different sets of eyes on an algorithm. People who identify serious issues typically receive a monetary reward.

"No matter how much you try to eradicate bias from AI and AI models, it's going to exist,'' says Brandon Purcell, vice president and principal analyst at Forrester Research, of the need for algorithmic bias bounties. "Oftentimes, because companies aren't engaging with potential stakeholders of an algorithm, there's going to be certain blind spots about how AI will discriminate against a group of people."

Companies can crowdsource the hunt for those blind spots, paying people to find them before the algorithm ends up affecting a large number of people, Purcell says. Case in point: in 2021, Twitter became the first company to launch an AI bias bounty.

The social media company received roughly 100 different identifications of bias from its community, but ended up paying a $3,500 prize to the person who identified the most pervasive bias in its algorithm. The winning entry targeted a feature Twitter offers that uses AI to determine where and how to crop a photo, Purcell says. A programmer found that the feature "favored light-skinned, younger faces" while cropping out older and darker-skinned people. "So there is very clear bias," he says.
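
The measurement behind such a finding is straightforward to sketch. The snippet below is illustrative only, not the entrant's actual code: it assumes a set of audit outcomes from hypothetical paired test images that differ chiefly in the faces' skin tone, and computes how often each face survived the crop.

```python
# Illustrative sketch of a paired-image crop audit (not the bounty
# entrant's actual code). Each outcome records which face a hypothetical
# cropping model kept in a two-face test image.

def keep_rates(outcomes):
    """Return the share of test images in which each group's face
    survived the crop."""
    total = len(outcomes)
    return {group: outcomes.count(group) / total for group in set(outcomes)}

# Hypothetical audit results for ten test images.
outcomes = ["lighter", "lighter", "lighter", "darker", "lighter",
            "lighter", "darker", "lighter", "lighter", "lighter"]

print(keep_rates(outcomes))  # e.g. {'lighter': 0.8, 'darker': 0.2}
# An unbiased cropper should keep each face roughly half the time;
# a persistent skew like this is the "very clear bias" described above.
```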

This is emblematic of the problems that have been seen with facial recognition systems, which are "very much subject to algorithmic bias," Purcell observes. He is quick to clarify that the term is a bit of a misnomer, because it is not the algorithm that is biased, but the training data being fed into it.
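
To make that distinction concrete, here is a minimal sketch of the kind of representation check that can reveal skew in training data before any model is trained. The demographic labels and the split are hypothetical, for illustration only.

```python
# Minimal sketch of a training-data representation check. The demographic
# labels and the skewed split below are hypothetical.
from collections import Counter

def representation_report(labels):
    """Return each group's share of the training examples."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Toy dataset: 800 examples from one group, 200 from another.
labels = ["lighter"] * 800 + ["darker"] * 200
print(representation_report(labels))  # {'lighter': 0.8, 'darker': 0.2}
# A model trained on this split sees four times as many faces from one
# group; this is one way the data, rather than the algorithm, encodes bias.
```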

Paris, France-based insurance tech provider Zelros put itself on the line with an AI bias test by participating in a tech hackathon, similar to a bias bounty, organized in the summer of 2021 by the French Prudential Supervision and Resolution Authority (ACPR) and the Banque de France. Because banks and insurers must choose technology platforms that meet regulatory requirements, Zelros wanted to demonstrate there were no biases or discrimination in its AI algorithms, says CEO and founder Christophe Bourguignat.

"We wanted to demonstrate that a third party's algorithms are not biased and we also want to influence future regulation to show [that] integrated levels of compliance are in our platform to put us in a good position to gain market share," Bourguignat explains.  

Zelros ended up placing second in the event, out of 12 competing teams.

Casey Ellis, founder and CTO of Bugcrowd, a crowdsourced security platform, says it's important that people with AI and ML skills "start to take an adversarial view of what they do."

The companies Bugcrowd works with offer rewards to people who find ways to break into their organizations. Since AI and ML are relatively new technologies, it's a good idea to start "kicking the tires" to identify and protect algorithms from bias, Ellis says.

He adds that Bugcrowd's AI bias bounty work to date has been in the context of private programs in the financial services, social media, and retail sectors.

Ellis believes interest in AI bias bounties has grown since the 2020 U.S. presidential election. "The conversation around the manipulation of social media algorithms for election interference shone a light on the fact that machine learning can be exploited to cause unintended consequences," he says. "That was when we first started seeing folks want to talk to us about adversarial AI bias test[ing]. Bias in any form that is unintended in any construct is bad."

Forrester is predicting that a half-dozen companies will launch AI bias bounties this year, mainly in financial services, since it's a highly regulated industry and "you can't afford to get this wrong," Purcell says.

Asked about the size of bug bounties generally, Purcell says, "I'm afraid I cannot give a range, but I'd suggest companies scale payments relative to the ubiquity and severity of the bias detected."

Purcell says he thinks companies have not yet jumped on the AI bias bounty bandwagon because doing so could expose them to regulatory and reputational risk. "Twitter took a risk by offering a bias bounty and, fortunately, there wasn't much backlash when [someone] identified bias in their photo-cropping algorithm," Purcell says. "Yes, your dirty laundry is going to be exposed," so the question becomes, "are you going to face consumer and regulatory backlash for that, or will consumers forgive you because it's in service of doing the right thing?"

It remains to be seen.

Esther Shein is a freelance technology and business writer based in the Boston area.
