
Communications of the ACM

News

Algorithmic Poverty


Illustration: user, UX, and AI icons. Credit: Shutterstock

"Life isn't fair" is perhaps one of the most frequently repeated philosophical statements passed down from generation to generation. In a world increasingly dominated by data, however, groups of people that have already been dealt an unfair hand may see themselves further disadvantaged through the use of algorithms to determine whether or not they qualify for employment, housing, or credit, among other basic needs for survival. In the past few years, more attention has been paid to algorithmic bias, but there is still debate about both what can be done to address the issue, as well as what should be done.

The use of an algorithm is not itself at issue; an algorithm is essentially a set of instructions for solving a problem or completing a task. The concern is the lack of transparency surrounding the data and how it is weighted and used for decision making, particularly when an algorithm's use may affect people in significant ways, often with no explanation of why they have been deemed unqualified or unsuitable for a product, service, or opportunity.

"There are well-known cases of AI (artificial intelligence) and machine learning models institutionalizing preexisting bias," says Chris Bergh, CEO of DataKitchen, Inc., a DataOps consultancy. Bergh notes that in 2014, Amazon created an AI model that screened resumés based on a database of Amazon hires over 10 years. Because Amazon's workforce was predominantly male, the algorithm learned to favor men over women. "The algorithm penalized resumés with the word 'women' in references to institutions or activities (things like 'women's team captain')," Bergh says, noting "it took a preexisting bias and deployed it at scale." To Amazon's credit, once the issue was discovered, it retired the algorithm.

Perhaps the most serious complaint about algorithms used to make financial determinations is that there is little transparency around the factors used to make those decisions, how those factors are weighted, and what impact specific changes in behavior will have on improving outcomes. This is particularly devastating to those on the bottom rungs of the economic ladder; people seeking basic financial or medical assistance, housing, or employment may feel the impact of biased algorithms disproportionately, since being "rejected" for a product or service may itself be factored into the next algorithm they encounter. It is difficult to gauge the actual impact because most firms keep their algorithms relatively opaque, arguing that a fully transparent and open formula could allow users to inappropriately "game" the system and degrade the algorithm's performance.

In 2019, Rep. Yvette Clarke (D-NY) introduced H.R.2231, the Algorithmic Accountability Act of 2019, which would direct the U.S. Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments. To date, no action has been taken on this bill.

Last year, the White House Office of Science and Technology Policy (OSTP) released a draft Guidance for Regulation of Artificial Intelligence Applications, which included 10 principles for agencies to consider when deciding whether and how to regulate AI. The draft noted the need for federal agencies that oversee AI applications in the private sector to consider "issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes."

In the absence of laws or standards, companies may need to take the lead in assessing and modifying their algorithms to ensure the reduction or elimination of inherent or implicit biases. For example, Modern Hire, a provider of software used to streamline the hiring process via machine learning algorithms, has explicitly excluded certain elements from being used to assess a potential candidate during the hiring process, maintaining only those elements directly relevant to the position under consideration.

"The only things we score in the hiring process are things that the candidate consciously provides to us for use in the process," explains Eric Sydell, executive vice president of innovation for Modern Hire. "For example, we may take the audio of what they're saying [in an interview], and then we transcribe that into words. And then we score the words, and only the specific phrases and words that they actually verbalized."

Sydell says although Modern Hire has the capability to score candidates' tone of voice, accent, and whether they sound enthusiastic, they do not score such attributes because those assessments could contain unconscious or conscious bias. Furthermore, there is not enough scientific evidence that such scores are effective indicators of new hire success. Says Sydell, "The science isn't advanced enough at this point to score those things in a way that [eliminates biases]."

Another key strategy for removing or reducing algorithmic bias is to ensure the people developing the model come from diverse backgrounds and bring diverse perspectives on the world. "That's how you can actually fix and balance models, and then you can make sure that you have different genders, different ethnicities, and different cultural perspectives, which are very, very important when you're doing your model development," says Seth Siegel, North American Leader of Artificial Intelligence Consulting for IT consulting firm Infosys. "You can never manage out all bias in a model, but what you can do is say, 'okay, we have a huge gap in our training data model over here, so let's go invest [into addressing that]'."
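As a rough illustration of the gap analysis Siegel describes, a simple audit can measure how well each group is represented in the training data before any model is built. This is only a sketch; the column names, groups, and 10% threshold below are assumptions, not anyone's production process.

```python
# Minimal sketch of a training-data representation audit: report each group's
# share of the data and flag gaps worth investing in. Column names, groups,
# and the 10% threshold are illustrative assumptions.
import pandas as pd

def representation_gaps(df: pd.DataFrame, group_col: str,
                        min_share: float = 0.10) -> pd.DataFrame:
    """Report each group's share of the training data and flag gaps."""
    counts = df[group_col].value_counts(dropna=False)
    report = pd.DataFrame({
        "rows": counts,
        "share": counts / len(df),
    })
    report["underrepresented"] = report["share"] < min_share
    return report

if __name__ == "__main__":
    training_data = pd.DataFrame({          # toy stand-in for a real dataset
        "gender": ["M"] * 80 + ["F"] * 15 + ["Nonbinary"] * 5,
        "label":  [1, 0] * 50,
    })
    print(representation_gaps(training_data, "gender"))
```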

Still, the traditional elements used by landlords, banks, and other financial gatekeepers to assess an individual's ability to pay rent, succeed at a job, or manage revolving debt (credit scores, internal bank scores, past credit decisions, and the algorithms that tie this data together) generally favor those who have already built up a successful track record of being assessed by those traditional institutions. Algorithms that assign more weight to the responsible use of traditional financial products and tools are likely to disproportionately impact people who are unbanked, disenfranchised, or otherwise outside the financial mainstream, a group that often includes poorer people, minorities, and recent immigrants without established credit histories.

"Today's system may be fair for those inside it, but it is not inclusive," says Naeem Siddiqi, senior advisor in the Risk Research and Quantitative Solutions division at business analytics software and services firm SAS. While Siddiqi has advocated for the use of alternative data to be incorporated into credit scoring models (such as historical utility payment data, rent payment data, or payments for things such as streaming services), he is not aware of any mainstream U.S. banks that do this at present.


"If you are waiting for an AI and machine learning model to tell you that, 'Oh, you shouldn't go do this', it [won't] happen."


It is inconceivable that large credit bureaus and the customers that utilize them will simply throw out the algorithms they currently use, Siddiqi says. "[Although] building new credit risk models is not a huge undertaking, the bigger challenge is acquiring adequate alternative data, while following all the requisite privacy rules and regulations."

That said, Infosys' Siegel says some people simply don't have great credit scores from a corporate risk perspective and, as a result, are unlikely to be offered access to the top tier of financial products and services. Still, "Companies that can figure out how to serve different parts of our society make money," Siegel says. "There's an incredible number of unbanked people in the U.S. Financial institutions that have offered similar banking products across [different] socioeconomic [levels], they perform better."

Siegel says increasing pressure on these organizations to eliminate biases likely will lead some companies to use algorithms that do not rely on metrics or indicators that may include bias when they roll out a new product or service. But this approach still presents a massive challenge.

When designing and using algorithms, it is virtually impossible to weed out all sources of bias, because humans are the designers, approvers, and users of algorithms, and humans have inherent biases, implicit and explicit, that are hard to fully eliminate. That is why David Sullivan, a data scientist at data science and AI consulting startup Valkyrie, has taken an approach to managing algorithmic bias that flies in the face of conventional wisdom. Sullivan says algorithms are constructed to find relationships in historical trends, and it is the data and history being encoded that contain the prejudice.

"A counterintuitive, yet effective, way to address this bias in the data is to include protected classes in the data used to develop the algorithm, so that the scientist can control for that factor," Sullivan explains. "The intention of including the data on the protected classes is to allow the model to encode what portion of the historical trend being modeled is based on those protected classes, and then exclude that relationship when making predictions on new data. This gives the model an ability to measure the historical impact of prejudice based on those protected classes, and explicitly avoid making predictions that rely on statistics affected by that bias."

Sullivan adds, "It is only by being thoughtful and observant with our own history of prejudice that we can overcome it; this is as true with machine learning as it is with our own behavior."
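One way to read Sullivan's approach is sketched below, under assumed data and feature names (this is not Valkyrie's actual code): the protected attribute is included when the model is fit, so the model can attribute part of the historical pattern to it, and is then pinned to a single reference value for every applicant at prediction time, so the learned prejudice term cannot drive differences in scores.

```python
# Minimal sketch of one reading of Sullivan's approach (not Valkyrie's actual
# method): fit with the protected attribute included so its historical effect
# is absorbed by its own coefficient, then hold it at one reference value for
# everyone when predicting. Data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
protected = rng.integers(0, 2, n)              # 0/1 protected-class indicator
income = rng.normal(50, 15, n)                 # legitimate predictor
# Historical outcomes tainted by prejudice against the protected group:
y = (income + 10 * (1 - protected) + rng.normal(0, 5, n) > 55).astype(int)

X_train = np.column_stack([income, protected])
model = LogisticRegression(max_iter=1_000).fit(X_train, y)   # protected class included

def neutralized_score(income_new: np.ndarray) -> np.ndarray:
    """Predict with the protected attribute pinned to one reference value,
    so the learned prejudice term cannot affect any applicant's score."""
    reference = np.zeros_like(income_new)      # same value for everyone
    return model.predict_proba(np.column_stack([income_new, reference]))[:, 1]

print(neutralized_score(np.array([45.0, 60.0])))
```

The same neutralization idea can be applied to other model families, though interactions between the protected attribute and other features make the separation less clean in practice.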

Indeed, the onus is on humans to take the necessary steps to assess and, if necessary, adjust their algorithms. "You have to actually take conscious action; don't let models make all your decisions," Siegel says. "If you are waiting for an AI and machine learning model to tell you that, 'Oh, you shouldn't go do this,' it [won't] happen."

Further Reading

Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms, Brookings Institution. May 22, 2019, https://brook.gs/3rVY4ew

O'Neil, C., The Truth About Algorithms, Royal Society for Arts, Manufactures and Commerce, October 17, 2018, https://www.youtube.com/watch?v=heQzqX35c9A


Author

Keith Kirkpatrick is Principal of 4K Research & Consulting, LLC, based in New York City.


©2021 ACM  0001-0782/21/10



Comments


Joseph Bedard

This is a very good article. Thank you for writing it. I would like to add a few things.

The draft Guidance for Regulation of Artificial Intelligence Applications would require federal agencies to consider "issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes." In my opinion, such regulation is unnecessary when companies already have an incentive to avoid litigation under the Equal Credit Opportunity Act and the Fair Housing Act, which prohibit discrimination based on race, religion, nationality, etc. Companies can't simply blame an algorithm to prove themselves innocent. I'm not aware of any case where a company has used an algorithm as a get-out-of-jail-free card. If there have been such cases, or there is a compelling argument (supported by peer-reviewed empirical data) that those laws are not effective, then we could have a conversation about what additional laws are needed.

The article says, "In the absence of laws or standards, companies may need to take the lead in assessing and modifying their algorithms ..." For the sake of clarification, there is not an absence of laws or legal standards, as stated above.

I wholeheartedly agree that we should search for options to improve the lives of marginalized people in a way that does not unfairly degrade the lives of others. Innovative companies working on DeFi (decentralized finance) and crypto-currencies are doing exactly that. They are in the process of disrupting the existing financial services industry as you read this.


Keith Kirkpatrick

Thank you for reading, and thank you for mentioning the draft Guidance for Regulation of Artificial Intelligence Applications. When it comes to fairness and discrimination, it's often a combination of solutions (regulation, pressure from marginalized groups, and market forces) that is most impactful in effecting change.


Joseph Bedard

I agree that pressure from and representation of marginalized groups is an important market force. Many companies have adapted to such opportunities and provided service to those marginalized communities. However, the fact that regulation has often effected positive change does not mean that regulation will always effect positive change in all situations.

There are undoubtedly situations where regulation is necessary. Clean air and water are good examples in which there are limited and shared resources that are susceptible to an economic tragedy of the commons. However, there are also situations where regulatory commissions are detrimental.

For example, the Interstate Commerce Commission (ICC) was originally formed to regulate railroads. After the public lost interest, the railroad industry gradually lobbied for favorable regulations and favorable appointments to the commission as years passed. When the trucking industry began to disrupt the railroad industry (providing lower-cost shipping), the ICC gained broader authority under the Motor Carrier Act of 1935. The ICC (corrupted in favor of the railroads) interfered in the development of the trucking industry and prevented end consumers from benefiting from reduced shipping costs, as well as wasting taxpayer money. (This example is detailed in the book Basic Economics by Thomas Sowell, Fifth Edition, pages 158-159. I recommend that anyone advocating government regulation read it.)

The moral of the story is that we should be careful what we wish for. I could see a similar situation develop where a commission is originally established to regulate AI algorithms, but then becomes corrupt and inhibits development of competing blockchain companies.

So, the question is whether regulation is necessary in the specific case of AI algorithms in specific industries for specific purposes. Aren't these situations in which advocates for marginalized groups can raise awareness so that entrepreneurs can pursue unmet market opportunities or existing companies can revise algorithms? Even this article admits that Amazon was able to identify and retire a problematic algorithm without the involvement of a government regulatory commission. Or, does anyone know of any examples where regulation has been successful in micro-managing how companies operate?


Keith Kirkpatrick

You raise some very good points about the unintended consequences of regulation. Some observers believe that when the largest players in a market support regulation, it provides them with a significant advantage over smaller competitors (look at Walmart supporting minimum wage increases, which can actually hurt small businesses that cannot compete).

I think the value of and approach to regulation will remain an open question for years, given the complexities and competing factions likely to be involved.


