Artificial Intelligence and Machine Learning News

Algorithmic Poverty

Algorithms can have a devastating impact on people's lives, especially if they already are struggling economically.

“Life isn’t fair” is perhaps one of the most frequently repeated philosophical statements passed down from generation to generation. In a world increasingly dominated by data, however, groups of people that have already been dealt an unfair hand may see themselves further disadvantaged through the use of algorithms to determine whether or not they qualify for employment, housing, or credit, among other basic needs for survival. In the past few years, more attention has been paid to algorithmic bias, but there is still debate about both what can be done to address the issue and what should be done.

The use of an algorithm is not at issue; algorithms are essentially a set of instructions on how to solve a problem or complete a task. Yet the lack of transparency surrounding the data and how it is weighted and used for decision making is a key concern, particularly when the algorithm’s use may impact people in significant ways, often with no explanation as to why they have been deemed unqualified or unsuitable for a product, service, or opportunity.

“There are well-known cases of AI (artificial intelligence) and machine learning models institutionalizing preexisting bias,” says Chris Bergh, CEO of DataKitchen, Inc., a DataOps consultancy. Bergh notes that in 2014, Amazon created an AI model that screened resumés based on a database of Amazon hires over 10 years. Because Amazon’s workforce was predominantly male, the algorithm learned to favor men over women. “The algorithm penalized resumés with the word ‘women’ in references to institutions or activities (things like ‘women’s team captain’),” Bergh says, noting “it took a preexisting bias and deployed it at scale.” To Amazon’s credit, once the issue was discovered, it retired the algorithm.
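
The dynamic Bergh describes is easy to reproduce on toy data. The following sketch (not Amazon’s actual system; the resumés, labels, and library choice are invented for illustration) trains a simple bag-of-words classifier on synthetic, historically skewed hiring decisions and then inspects the weight the model learns for a gendered token:

```python
# Toy illustration (hypothetical data): a text classifier trained on
# historically skewed hiring outcomes learns to penalize a gendered token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" resumes and hire decisions (1 = hired, 0 = rejected).
# The skew is deliberate: resumes mentioning a women's organization were
# mostly rejected in the past, so the model inherits that pattern.
resumes = [
    "chess club captain software engineer",          # hired
    "software engineer python java",                 # hired
    "rugby team captain systems programmer",         # hired
    "women's chess club captain software engineer",  # rejected
    "women's coding society lead python developer",  # rejected
    "data analyst women's team captain",             # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for the token "women" is negative: the bias in the
# historical labels has been encoded directly into the model's weights.
idx = vectorizer.vocabulary_["women"]
print("weight for token 'women':", model.coef_[0][idx])
```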

Perhaps the most serious complaint about algorithms used to make decisions that impact financial determinations is that there is little transparency around the factors used to make those decisions, how the various elements are weighted, and what impact specific changes in behavior will have on improving outcomes. This is particularly devastating to those on the bottom rungs of the economic ladder; people seeking basic financial or medical assistance, housing, or employment may feel the impact of biased algorithms disproportionally, since being “rejected” for a product or service may actually be factored into the next algorithm they encounter. It is impossible to tell what the actual impact may be, because most firms keep their algorithms relatively opaque, arguing that a fully transparent and open formula could allow users to inappropriately “game” the system and alter the algorithm’s performance.
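
The feedback effect described above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical, including the scoring rules and thresholds; the point is only to show how one opaque rejection can become an input that lowers the score produced by the next model:

```python
# Hypothetical sketch of a scoring feedback loop: a rejection by one opaque
# model becomes an input feature that penalizes the applicant in the next one.
def credit_card_model(applicant):
    # Invented weights; real lenders do not publish theirs.
    score = 0.6 * applicant["income_k"] + 0.4 * applicant["years_banked"] * 10
    return score >= 50  # approve?

def rental_screening_model(applicant):
    score = 0.6 * applicant["income_k"] + 30
    # Earlier denials compound: each prior rejection drags the score down.
    score -= 25 * applicant["prior_rejections"]
    return score >= 50

applicant = {"income_k": 38, "years_banked": 1, "prior_rejections": 0}

if not credit_card_model(applicant):
    applicant["prior_rejections"] += 1  # the first "no" is now data

# Without the earlier rejection this applicant would have cleared the bar;
# with it, they are denied, and are never told which factor tipped either decision.
print("approved for rental?", rental_screening_model(applicant))
```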

In 2019, Rep. Yvette Clarke (D-NY) introduced H.R.2231, the Algorithmic Accountability Act of 2019, which would direct the U.S. Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments. To date, no action has been taken on this bill.

Last year, the White House Office of Science and Technology Policy (OSTP) released a draft Guidance for Regulation of Artificial Intelligence Applications, which included 10 principles for agencies to consider when deciding whether and how to regulate AI. The draft noted the need for federal agencies that oversee AI applications in the private sector to consider “issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes.”

In the absence of laws or standards, companies may need to take the lead in assessing and modifying their algorithms to ensure the reduction or elimination of inherent or implicit biases. For example, Modern Hire, a provider of software used to streamline the hiring process via machine learning algorithms, has explicitly excluded certain elements from being used to assess a potential candidate during the hiring process, maintaining only those elements directly relevant to the position under consideration.

“The only things we score in the hiring process are things that the candidate consciously provides to us for use in the process,” explains Eric Sydell, executive vice president of innovation for Modern Hire. “For example, we may take the audio of what they’re saying [in an interview], and then we transcribe that into words. And then we score the words, and only the specific phrases and words that they actually verbalized.”
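
A minimal sketch of that kind of pipeline might look like the following. The phrase list, weights, and scoring function are invented for illustration and are not Modern Hire’s actual model; the point is that only the transcribed words are scored, and nothing about the audio itself is measured:

```python
# Hypothetical sketch: score only what the candidate said, never how they
# sounded. The transcript is the sole input; tone, accent, and apparent
# enthusiasm in the audio are never measured.

# Invented, job-relevant phrases and weights for a support-engineer role.
PHRASE_WEIGHTS = {
    "troubleshooting": 2.0,
    "customer escalation": 1.5,
    "root cause": 1.5,
    "python": 1.0,
}

def score_transcript(transcript: str) -> float:
    """Score a candidate from transcribed interview text only."""
    text = transcript.lower()
    return sum(w for phrase, w in PHRASE_WEIGHTS.items() if phrase in text)

transcript = (
    "I led troubleshooting for customer escalation tickets and "
    "wrote Python tooling to find the root cause faster."
)
print(score_transcript(transcript))  # 6.0
```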

Sydell says although Modern Hire has the capability to score candidates’ tone of voice, accent, and whether they sound enthusiastic, they do not score such attributes because those assessments could contain unconscious or conscious bias. Furthermore, there is not enough scientific evidence that such scores are effective indicators of new hire success. Says Sydell, “The science isn’t advanced enough at this point to score those things in a way that [eliminates biases].”

Another key strategy for helping to remove or reduce algorithmic bias is to ensure that the people developing the model come from diverse backgrounds and bring diverse perspectives of the world. “That’s how you can actually fix and balance models, and then you can make sure that you have different genders, different ethnicities, and different cultural perspectives, which are very, very important when you’re doing your model development,” says Seth Siegel, North American Leader of Artificial Intelligence Consulting for IT consulting firm Infosys. “You can never manage out all bias in a model, but what you can do is say, ‘okay, we have a huge gap in our training data model over here, so let’s go invest [into addressing that]’.”
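
One concrete way a team can act on that advice is to audit its training data for representation gaps before building the model. The sketch below uses invented group labels, counts, and a made-up reference distribution; it simply flags groups whose share of the training set falls well short of their share of the applicant population:

```python
# Hypothetical audit of training-data representation: compare each group's
# share of the training set with its share of the applicant population and
# flag large gaps as places to invest in collecting more data.
from collections import Counter

training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50   # invented counts
population_share = {"A": 0.55, "B": 0.30, "C": 0.15}        # invented reference

counts = Counter(training_groups)
total = sum(counts.values())

for group, expected in population_share.items():
    actual = counts.get(group, 0) / total
    gap = expected - actual
    if gap > 0.05:  # arbitrary threshold for "a huge gap"
        print(f"group {group}: {actual:.0%} of training data "
              f"vs {expected:.0%} of applicants -> collect more data")
```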

Still, the traditional elements used by landlords, banks, and other financial gatekeepers to assess an individual’s ability to pay rent, succeed at a job, or manage revolving debt (credit scores, internal bank scores, past credit decision data, and the algorithms that tie this data together) generally favor those who have already built up a successful track record of being assessed by those traditional institutions. Algorithms that assign more weight to the responsible use of traditional financial products and tools are likely to disproportionally impact people who are unbanked, disenfranchised, or otherwise outside of the financial mainstream, a group that often includes poorer people, minorities, and recent immigrants who have yet to establish a financial track record.

“Today’s system may be fair for those inside it, but it is not inclusive,” says Naeem Siddiqi, senior advisor in the Risk Research and Quantitative Solutions division at business analytics software and services firm SAS. While Siddiqi has advocated for the use of alternative data to be incorporated into credit scoring models (such as historical utility payment data, rent payment data, or payments for things such as streaming services), he is not aware of any mainstream U.S. banks that do this at present.
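
What such a model might look like is sketched below: a credit-risk classifier whose feature set is extended with alternative payment histories (rent, utilities, streaming) alongside traditional bureau data. The features, data, and labels are synthetic, and, as Siddiqi notes, no mainstream U.S. bank is known to score applicants this way today:

```python
# Hypothetical sketch: extend a credit-risk model's features with alternative
# payment data so applicants with thin bureau files are not scored on
# traditional credit history alone. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [bureau_score_scaled, years_of_credit,
#           on_time_rent_rate, on_time_utility_rate, on_time_streaming_rate]
X = np.array([
    [0.80, 10, 0.95, 0.98, 1.00],
    [0.20,  1, 0.40, 0.50, 0.60],
    [0.00,  0, 0.97, 0.95, 1.00],   # "credit invisible" but reliable payer
    [0.65,  6, 0.90, 0.92, 0.95],
    [0.10,  0, 0.30, 0.45, 0.50],
])
y = np.array([1, 0, 1, 1, 0])  # 1 = repaid, 0 = defaulted (synthetic labels)

model = LogisticRegression().fit(X, y)

# An applicant with no traditional credit file but a strong record of rent
# and utility payments can now receive a meaningful score.
applicant = np.array([[0.0, 0, 0.96, 0.94, 1.0]])
print("estimated repayment probability:", model.predict_proba(applicant)[0, 1])
```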


It is inconceivable that large credit bureaus and the customers that utilize them will simply throw out the algorithms they currently use, Siddiqi says. “[Although] building new credit risk models is not a huge undertaking, the bigger challenge is acquiring adequate alternative data, while following all the requisite privacy rules and regulations.”

That said, Infosys’ Siegel says some people simply do not have strong credit scores from a corporate risk perspective, and as a result they are unlikely to be offered access to the top tier of financial products and services. Still, “Companies that can figure out how to serve different parts of our society make money,” Siegel says. “There’s an incredible number of unbanked people in the U.S. Financial institutions that have offered similar banking products across [different] socioeconomic [levels], they perform better.”

Siegel says increasing pressure on these organizations to eliminate biases likely will lead some companies to use algorithms that do not rely on metrics or indicators that may include bias when they roll out a new product or service. But this approach still presents a massive challenge.

When designing and using algorithms, it is virtually impossible to weed out all sources of bias, because humans are the designers, approvers, and users of algorithms, and humans themselves have inherent biases, implicit and explicit, that are hard to fully eliminate. That is why David Sullivan, a data scientist at data science and AI consulting startup Valkyrie, has taken an approach to managing algorithmic bias that flies in the face of conventional wisdom. Sullivan says algorithms are constructed to find relationships in historical data, and it is that data, and the history it encodes, that contains the prejudice.

“A counterintuitive, yet effective, way to address this bias in the data is to include protected classes in the data used to develop the algorithm, so that the scientist can control for that factor,” Sullivan explains. “The intention of including the data on the protected classes is to allow the model to encode what portion of the historical trend being modeled is based on those protected classes, and then exclude that relationship when making predictions on new data. This gives the model an ability to measure the historical impact of prejudice based on those protected classes, and explicitly avoid making predictions that rely on statistics affected by that bias.”
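
One plausible way to implement the idea Sullivan describes, sketched on synthetic data below, is to fit the model with the protected attribute included and then hold that attribute at a single reference value when scoring new applicants, so predictions no longer vary with it. This is an illustration of the general technique, not Valkyrie’s actual method:

```python
# Hedged sketch of the technique described above (synthetic data): include the
# protected attribute while fitting so the model can attribute part of the
# historical pattern to it, then neutralize that attribute when scoring new
# applicants so predictions no longer vary with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

protected = rng.integers(0, 2, n)   # 0/1 protected-class indicator
skill = rng.normal(0, 1, n)         # legitimate, job-relevant signal
# Historical outcomes were biased against the protected group (synthetic).
logits = 1.5 * skill - 1.0 * protected
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Fit WITH the protected attribute so its historical effect is absorbed by
# its own coefficient instead of leaking into correlated features.
X = np.column_stack([skill, protected])
model = LogisticRegression().fit(X, y)

def score(skill_value: float) -> float:
    """Score a new applicant with the protected attribute neutralized
    (held at the same reference value for everyone)."""
    x = np.array([[skill_value, 0.0]])  # reference value for all applicants
    return model.predict_proba(x)[0, 1]

# Identical skill now yields an identical score regardless of group.
print(score(0.5), score(0.5))
```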

Sullivan adds, “It is only by being thoughtful and observant with our own history of prejudice that we can overcome it; this is as true with machine learning as it is with our own behavior.”

Indeed, the onus is on the humans, who need to take the necessary steps to assess and, if necessary, adjust their algorithms. “You have to actually take conscious action; don’t let models make all your decisions,” Siegel says. “If you are waiting for an AI and machine learning model to tell you that, ‘Oh, you shouldn’t go do this,’ it [won’t] happen.”

Further Reading

Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms, Brookings Institution, May 22, 2019, https://brook.gs/3rVY4ew

O’Neil, C., The Truth About Algorithms, Royal Society for Arts, Manufactures and Commerce, October 17, 2018, https://www.youtube.com/watch?v=heQzqX35c9A
