AI Bias: Challenges and Solutions

Bias in artificial intelligence (AI) is not a new problem. In 1988, the U.K. Commission for Racial Equality (now the Equality and Human Rights Commission) found that St. George's Medical School in London had discriminated on racial and sexual grounds "through the operation of [a] computer program between 1982 and 1986." The algorithm—designed to automate the admissions process—carried negative weightings for "non-Caucasian names" and those of female applicants.

For decades, AI bias was predominantly a thorny technical issue discussed by researchers and developers. Now, thanks in part to the phenomenal popular uptake of generative AI, conversations about bias have been launched into the public sphere. The arena is lively, to say the least: enormous volumes of data are being scraped to train models, some of them open source and others black boxes, while societal divides and volatile 'culture wars' add tension to the dialog.

Policymakers have started making moves—aspects of the E.U.'s proposed AI Act, such as transparency and explainability, are likely to impact bias, and in the U.S., the National Institute of Standards and Technology has published its "first step on the roadmap for developing detailed socio-technical guidance for identifying and managing AI bias."

However, universal standards for tackling AI bias still do not exist.

Baked-in from the get-go

Bias in AI is "the human bias that is baked into algorithms, machine learning systems, and computational systems," explains Yeshimabeit Milner, founder and CEO of Data for Black Lives (D4BL), which describes itself as "a movement of activists, organizers, and scientists committed to the mission of using data to create concrete and measurable change in the lives of Black people." When training data containing bias is fed to AI models, the outcomes will be biased, too. Says Milner, "To use the old computer science adage: it's garbage in, garbage out."

Bias is not only about perceptions, Milner says, but also about narratives that become entrenched in policy and then "baked into code." She points to how the use of U.S. zip codes in ML-powered credit scoring, introduced by FICO in 1989, has disadvantaged Black communities. While credit scoring does not have a variable for race, the zip code system can stand in for race, as it reflects redlining and segregation policies from the 1930s, Milner explains. "Zip codes have become a proxy for race. If you ask somebody where they live, for their zip code, you can predict beyond a reasonable doubt what race they are."
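
Milner's proxy point can be made concrete with a toy experiment. The sketch below uses entirely synthetic data and invented probabilities to show how a classifier can recover a protected attribute from a supposedly neutral feature; if race is predictable from zip code, a credit model that consumes zip codes is not race-blind in any meaningful sense.

```python
# Sketch: how a "neutral" feature can stand in for a protected attribute.
# All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)

# Residential segregation makes zip code highly informative about race,
# mirroring the redlining legacy Milner describes (probabilities invented).
zip_codes = rng.choice(["60601", "60617", "60643", "60657"], size=5000)
p_group = {"60601": 0.10, "60617": 0.90, "60643": 0.85, "60657": 0.15}
race = np.array([rng.random() < p_group[z] for z in zip_codes])

X = OneHotEncoder().fit_transform(zip_codes.reshape(-1, 1))
X_train, X_test, y_train, y_test = train_test_split(X, race, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Race predicted from zip code alone: {clf.score(X_test, y_test):.0%}")
# High accuracy means any downstream model fed zip codes effectively
# "sees" race, even though race is never an explicit input.
```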

Sanmay Das is co-director of the Center for Advancing Human-Machine Partnership at George Mason University and chair of ACM's Special Interest Group on Artificial Intelligence (ACM SIGAI). Like Milner, Das flags ML credit scoring as illustrating the pitfalls of bias, adding that as AI becomes increasingly embedded in society, data gaps add to the problem. These gaps occur when groups of people—often from marginalized communities—have been neglected or excluded during data collection processes, or when data about specific groups simply does not exist. Models trained on such data are likely to produce biased or skewed outcomes as a result.

Says Das, AI researchers are "not as good" as social scientists at thinking about samples, and often turn to Web scraping for speed and convenience. "If I go and scrape everything that's happening on the Web to train a chatbot, I'm going to get something that's very different from human society as a whole." Content in English, or generated in what Das calls "toxic chat rooms," is likely to be overrepresented simply because of the volume of each online, he explains.
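
Both failure modes, missing groups and lopsided scraping, come down to a training distribution that does not match the population. A minimal, entirely synthetic sketch of the data-gap case: a model trained where one group is barely represented learns the majority's pattern and performs near chance on the minority.

```python
# Sketch (synthetic data): when one group is nearly absent from training
# data, the model fits the majority and error concentrates on the gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample(n, w):
    """Draw n points whose true label follows group-specific weights w."""
    X = rng.normal(size=(n, 2))
    y = (X @ w > 0).astype(int)
    return X, y

w_a, w_b = np.array([1.0, 1.0]), np.array([1.0, -1.0])  # groups differ

# Training data with a gap: group B is barely represented.
Xa, ya = sample(4900, w_a)
Xb, yb = sample(100, w_b)
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate each group separately.
for name, w in [("group A", w_a), ("group B", w_b)]:
    Xt, yt = sample(1000, w)
    print(name, f"accuracy: {clf.score(Xt, yt):.0%}")
# Expected: near-perfect for group A, near chance for group B, because
# the model never saw enough of B to learn its different pattern.
```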

Phoenix Perry, an artist and AI researcher in the Creative Computing Institute at the U.K.'s University College London, likens bias to preparing a meal with "tainted ingredients"—in this case, data—that are loaded with biases prevalent online, such as racism, sexism, and transphobia. "If the data or 'ingredients' are flawed, no amount of computational prowess or advanced machine learning can rectify the resulting product. This tainted output mirrors harmful societal biases and perpetuates their existence," Perry says.

Even if training data is not biased, problems can arise due to model trainers' own biases, an issue that is exacerbated by the lower percentage of women than men working in AI, says Arisa Ema of the University of Tokyo's Institute for Future Initiatives and the RIKEN Center for Advanced Intelligence Project. "This bias in our social structure already creates a bias in the designer community, and in algorithm and data selection."

Good data and grassroots

If bias is "baked in," how can it be combatted?

Some solutions take a sector-focused approach. The STANDING Together project, led by researchers at the U.K.'s University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham, is developing standards for diverse datasets in healthcare AI that better represent society.

In a legal context, at ACM's 2022 FAccT conference, a team from the Centre for Research and Technology Hellas (CERTH) in Greece, the Centre for IT and IP Law in Belgium, and U.K.-based ethical AI specialists Trilateral Research presented a new approach to fairness-aware ML for mitigating algorithmic bias in law enforcement. The researchers used synthetically generated samples to create "more balanced datasets" that mitigated instances of racial bias they had identified during analysis of the existing data.
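
The core move, rebalancing training data across demographic and outcome cells, can be sketched in a few lines. The version below substitutes simple random oversampling with replacement for the authors' synthetic sample generation, on invented columns, so it illustrates the idea rather than reproducing their method.

```python
# Sketch: rebalancing a dataset across (group, label) cells before
# training. Data and column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)

# Hypothetical imbalanced dataset: one feature, a protected attribute,
# and a binary label; a real dataset would have many more features.
df = pd.DataFrame({
    "feature": rng.normal(size=1000),
    "group": rng.choice(["A", "B"], size=1000, p=[0.9, 0.1]),
    "label": rng.integers(0, 2, size=1000),
})

# Oversample every (group, label) cell to the size of the largest cell,
# so no demographic/outcome combination dominates training.
target = df.groupby(["group", "label"]).size().max()
cells = [cell.sample(target, replace=True, random_state=2)
         for _, cell in df.groupby(["group", "label"])]
balanced = pd.concat(cells, ignore_index=True)

print(balanced.groupby(["group", "label"]).size())  # all cells now equal
```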

For Milner, solutions lie in community engagement and rethinking data collection, areas where D4BL has a track record of instigating change. During the pandemic, the team led the demand to release state-level data by race to investigate COVID-19's disproportionate impact on Black people, and worked with volunteer data scientists to build the codebase to do so. "Every open data portal that released COVID-19 data automatically gave real-time updates on the death and infection rates of Black communities by state; that was a really powerful tool," she says.

Milner is optimistic about AI's potential to bring "tremendous advances." However, for everyone to benefit, the power of data needs to be put "back into the hands of the people," she says. Conversations about AI tend to be elite, she adds; solutions mean engaging grassroots organizations and "changing the cast of characters" who get to make decisions. "It is about bringing people to the table, literally, by building a movement of scientist activists, Black communities, and the scientific community," Milner says.

As an artist, Perry brings a novel perspective, advocating the use of small-scale datasets to combat bias and to give humans more influence over generative AI, especially in creative contexts. "The unique advantage of these datasets is their highly personalized nature," says Perry, who also backs formal regulation to curtail efforts "to exploit or introduce bias in datasets for profit, a practice already evident in social media."

Stability AI founder and CEO Emad Mostaque also has flagged the advantages of smaller datasets. Speaking on the BBC's Sunday with Laura Kuenssberg recently, Mostaque said, "Don't use the whole Internet crawled, use national datasets that are highly curated and reflect the diversity of humanity as opposed to the Western Internet as we see it. These models are more likely to be stable; they are more likely to be aligned with humans."

Das agrees with Perry that it is time for regulation. "Companies have to face some form of scrutiny on the kinds of things that they are doing and putting out in the world," he says, pointing to existing regulatory systems in drug development and genetic engineering as examples. "We need to think about having an apparatus that has some teeth; that can try to incentivize appropriate safeguards."

New approaches to data collection and model training, along with increased regulation of AI bias, look likely; whether developers and policymakers can keep pace with the speed of advances is less certain.

Karen Emslie is a location-independent freelance journalist and essayist.
