The Dangers of Automating Social Programs

Is it possible to keep bias out of a social program driven by one or more algorithms?
[Illustration: wheelchair on a crumbling path]

Ask poverty attorney Joanna Green Brown for an example of a client who, because of a program driven by artificial intelligence (AI), fell through the cracks and lost social services benefits for which they may have been eligible, and you will get an earful.

There was the “highly educated and capable” client who had had heart failure and was on a heart and lung transplant wait list. The questions he was presented with in a Social Security benefits application “didn’t encapsulate his issue,” and his child subsequently did not receive benefits.

“It’s almost impossible for an AI system to anticipate issues related to the nuance of timing,” Green Brown says.

Then there’s the client who had to apply for a Medicaid recertification, but misread a question and received a denial a month later. “Suddenly, Medicaid has ended and you’re not getting oxygen delivered. This happens to old people frequently,” she says.

Another client died of cancer that Green Brown says was preventable, but the woman did not know social service programs existed, did not have an education, and did not speak English. “I can’t say it was AI-related,” she notes, “but she didn’t use a computer, so how is she going to get access to services?”

Such cautionary tales illustrate what can happen when systems become automated, the human element is removed, and a person in need lacks a support system to help them navigate the murky waters of applying for government assistance programs like Social Security and Medicaid.

There are so many factors that go into an application or appeals process for social services that many people just give up, Green Brown says. Applicants can also lose benefits when the system’s line of questioning ends before it has captured their whole story. “The art of actual conversation is what teases out information,” she says. A human can tell something isn’t right simply by observing a person for a few minutes, determining why they are uncomfortable, for example, and whether it is because they have a hearing problem or a cognitive or psychological issue.

“The stakes are high when it comes to trying to save time and money versus trying to understand a person’s unique circumstances,” Green Brown says. “Data is great at understanding who the outliers are; it can show fraud and show a person isn’t necessarily getting all the benefits they need, but it doesn’t necessarily mean it’s correct information, and it’s not always indicative of eligibility for benefits.”

There are well-documented examples of bias in automated systems used to provide guidelines in sentencing criminals, predicting the likelihood of someone committing a future crime, setting credit scores, and in facial recognition systems. As automated systems relying on AI and machine learning become more prevalent, the trick, of course, is finding a way to ensure they are neutral in their decision-making. Experts have mixed views on whether they can be.

AI-based technologies can undoubtedly play a positive role in helping human services agencies cut costs, significantly reduce labor, and deliver faster and better services. Yet taking the human element out of the equation can be dangerous, according to the 2017 Deloitte report “AI-augmented human services: Using cognitive technologies to transform program delivery.”

“AI can augment the work of caseworkers by automating paperwork, while machine learning can help caseworkers know which cases need urgent attention. But ultimately, humans are the users of AI systems, and these systems should be designed with human needs in mind,” the report states. That means agencies first need to determine the biggest pain points for caseworkers and for the individuals and families they serve. Issues to factor in include which processes are most complex and whether they can be simplified, and which activities take the most time and whether they can be streamlined, the report suggests.

Use of these systems is in the early stages, but we can expect to see a growing number of government agencies implementing AI systems that can automate social services to reduce costs and speed up delivery of services, says James Hendler, director of the Rensselaer Institute for Data Exploration and Applications and one of the originators of the Semantic Web.

“There’s definitely a drive, as more people need social services, to bring in any kind of computing automation and obviously, AI and machine learning are offering some new opportunities in that space,” Hendler says.

One way an AI system can be beneficial is when someone seeking benefits needs to access cross-agency information. For example, if someone is trying to determine whether they can get their parents into a government-funded senior living facility, there are myriad questions to answer. “The potential of AI and machine learning is figuring out how to get people to the right places to answer their questions, and it may require going to many places and piecing together information. AI can help you pull it together as one activity,” Hendler says.

One of the main, persistent problems these systems have, however, is inherent bias, because data is input by biased humans, experts say.

Just like “Murphy’s Law,” which states that “anything that could go wrong, will,” Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, says there’s a Murphy’s Law for AI: “It’s a law of unintended consequences, because a system looks at a vast range of possibilities and will find a very counterintuitive solution to a problem.”

“People struggle with their own biases, whether racist or sexist—or because they’re just plain hungry,” he says. “Research has shown that there are [judicial] sentencing differences based on the time of day.”

Machines fall short in that they have no “common sense,” so if a data error is input, it will continue to apply that error, Etzioni says. Likewise, if there is a pattern in the data that is objectionable because the data is from the past but is being used to create predictive models for the future, the machine will not override it.

“It won’t say, ‘this behavior is racist or sexist and we want to change that’; on the contrary, the behavior of the algorithm is to amplify behaviors found in the data,” he says. “Data codifies past biases.”

Because machine learning systems seek a signal or pattern in the data, “we need to be very careful in the application of these systems,” Etzioni says. “If we are careful, there’s a great potential benefit as well.”
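
To make the point concrete, here is a minimal, synthetic sketch of bias amplification: a simple classifier is trained on “historical” approval decisions that were skewed against one group, and its predictions reproduce the skew. The data, the group attribute, and the use of NumPy and scikit-learn are illustrative assumptions, not drawn from any real benefits system.

```python
# Hypothetical illustration: a model trained on historically biased approval
# decisions learns to reproduce that bias. All data and groups are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a "need" score and a sensitive group attribute (0 or 1).
group = rng.integers(0, 2, size=n)
need = rng.normal(loc=0.0, scale=1.0, size=n)

# Historical (biased) approvals: same need, but group 1 was approved less often.
p_approve = 1 / (1 + np.exp(-(need - 1.0 * group)))
approved = rng.random(n) < p_approve

# Train on the historical decisions, including the group attribute as a feature.
X = np.column_stack([need, group])
model = LogisticRegression().fit(X, approved)

# The learned model repeats the historical pattern: lower approval rates
# for group 1 at the same level of need.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: historical rate {approved[group == g].mean():.2f}, "
          f"model rate {pred[group == g].mean():.2f}")
```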

To make AI and machine learning systems work appropriately, many cognitive technologies need to be trained and retrained, according to the Deloitte report. “They improve via deep learning methods as they interact with users. To make the most of their investments in AI, agencies should adopt an agile approach [with software systems], continuously testing and training their cognitive technologies.”

David Madras, a Ph.D. student and machine learning researcher at the University of Toronto (U of T), believes if an algorithm is not certain of something, rather than reach a conclusion, it should have the option to indicate uncertainty and defer to a human.
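
A minimal sketch of that “defer” option, assuming an ordinary probabilistic classifier and a hand-picked confidence threshold (this is not the U of T model itself): predictions too close to the decision boundary are routed to a human reviewer rather than automated.

```python
# Illustrative sketch of deferring to a human: abstain when the predicted
# probability is too close to 0.5. Threshold and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_or_defer(model, X, confidence=0.8):
    """Return 1/0 predictions where the model is confident, else -1 ("defer")."""
    proba = model.predict_proba(X)[:, 1]
    return np.where(proba >= confidence, 1,
                    np.where(proba <= 1 - confidence, 0, -1))

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

decisions = predict_or_defer(model, X)
print("automated:", (decisions != -1).mean(),
      "deferred to human:", (decisions == -1).mean())
```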

Madras and colleagues at U of T developed an algorithmic model that includes fairness. The definition of fairness they used for their model is based on “equalized odds,” which they found in a 2016 paper, “Equality of Opportunity in Supervised Learning,” by computer scientists from Google, the University of Chicago, and the University of Texas, Austin. According to that paper, Madras explains, “the model’s false positive and false negative rates should be equal for different groups (for example, divided by race). Intuitively, this means the types of mistakes should be the same for different types of people (there are mistakes that can advantage someone, and mistakes that can disadvantage someone).”
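
In code, the equalized-odds check amounts to comparing false positive and false negative rates across groups. The helper below is an illustrative sketch on synthetic labels, not the authors’ implementation.

```python
# A small helper (illustrative only) for the "equalized odds" idea from
# Hardt, Price, and Srebro (2016): false positive and false negative rates
# should be (approximately) equal across groups.
import numpy as np

def group_error_rates(y_true, y_pred, group):
    """Return {group value: (false positive rate, false negative rate)}."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        negatives = y_true[m] == 0
        positives = y_true[m] == 1
        fpr = (y_pred[m][negatives] == 1).mean() if negatives.any() else float("nan")
        fnr = (y_pred[m][positives] == 0).mean() if positives.any() else float("nan")
        rates[g] = (fpr, fnr)
    return rates

# Hypothetical example: two groups with identical error rates satisfy the criterion.
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_error_rates(y_true, y_pred, group))
```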

The U of T researchers wanted to examine the unintended side effects of machine learning in decision-making systems, since a lot of these models make assumptions that don’t always hold in practice. They felt it was important to consider the possibility that an algorithm could respond “I don’t know” or “pass,” which led them to think about the relationship between a model and its surrounding system.

“There is often an assumption in machine learning that the data is a representative sample, or that we know exactly what objective we want to optimize,” Madras says. That has proven not to be the case in many decision problems, he adds.

Madras acknowledges the difficulty of knowing how to add fairness to (or subtract unfairness from) an algorithm. “Firstly, unfairness can creep in at many points in the process, from problem definition, to data collection, to optimization, to user interaction.” Also, he adds, “Nobody has a great single definition of ‘fairness.’ It’s a very complex, context-specific idea [that] doesn’t lend itself easily to one-size-fits-all solutions.”

The definition they chose for their model could just as easily be replaced by another, he notes.

In terms of whether social services systems can be unbiased when the algorithm running them may have built-in biases, Madras says that when models learn from historical data, they will pick up any natural biases, which will be a factor in their decision-making.

“It’s also very difficult to make an algorithm unbiased when it is operating in a highly biased environment; especially when a model is learned from historical data, the tendency is to repeat those patterns in some sense,” Madras says.

Etzioni believes an AI system can be bias-free even when biased data is input, although that is not easy to achieve. An original algorithm tries to maximize consistency with the data, he says, but that past data may not be the only criterion.

“If we can define a criterion and mathematically describe what it means to be free of bias, we can give that to the machine,” he says. “The challenge becomes describing formally or mathematically what bias means, and secondly, you have to have some adherence to the data. So there’s really a tension between consistency with the data, which is clearly desirable, and being bias-free.”

Researchers are working to make it possible to support both consistency with the data and freedom from bias, he adds.
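
One common way to express that tension, sketched below on synthetic data, is to minimize a single objective that adds a fairness penalty to the usual prediction loss; the weight lam is a hypothetical knob trading consistency with the data against the bias criterion. Raising lam typically shrinks the between-group score gap at some cost in fit to the data.

```python
# Illustrative (not production) formulation of the tension Etzioni describes:
# minimize prediction error plus a penalty on the gap in average predicted
# scores between groups. All data here is synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, size=n)
x = rng.normal(size=n) + 0.8 * group          # feature correlated with group
y = (x + 0.3 * rng.normal(size=n) > 0.5).astype(float)
X = np.column_stack([np.ones(n), x])

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def objective(w, lam):
    p = sigmoid(X @ w)
    log_loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    parity_gap = p[group == 0].mean() - p[group == 1].mean()
    return log_loss + lam * parity_gap ** 2    # consistency vs. fairness penalty

for lam in (0.0, 10.0):
    w = minimize(objective, x0=np.zeros(2), args=(lam,)).x
    p = sigmoid(X @ w)
    gap = abs(p[group == 0].mean() - p[group == 1].mean())
    acc = ((p > 0.5) == y).mean()
    print(f"lam={lam:5.1f}  accuracy={acc:.3f}  score gap={gap:.3f}")
```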

For AI to augment the work of government caseworkers and make social programs more efficient, Etzioni says, the technical progress being made must be coupled with educating people on how to use these systems.

“Part of the problem is when a human just blindly adheres to the recommendations of the system without trying to make sense of them, and the system says, ‘It must be true,’ but if the machine’s analysis is one output and a sophisticated person analyzes it, we find ourselves in the best of both worlds.”

AI, he says, really should stand for “augmented intelligence,” with technology playing a supporting role.

“Humans are better than computers at exploring those grey areas around the edges of problems,” agrees Hendler. “Computers are better at the black-and-white decisions in the middle.”

The issue of algorithmic transparency and bias was discussed at a November 2017 conference held by the Paris-based Organisation for Economic Co-operation and Development (OECD). Although several beneficial societal use cases of AI were mentioned, researchers said the solution lies in addressing system bias from a policy perspective as well as a design perspective.

“Right now, AI is designed so as to optimize a given objective,” the researchers stated. “However, what we should be focusing on is designing AI that delivers results that are in line with people’s well-being. By observing human reactions to various outcomes, AI could learn through a technique called ‘cooperative inverse reinforcement learning’ what our preferences are, and then work towards producing results consistent with those preferences.”
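
Cooperative inverse reinforcement learning is too involved for a short example, but the underlying idea of inferring preferences from observed human choices can be sketched with a simple pairwise preference model. Everything below (the features, the hidden weights, the choice model) is synthetic and illustrative, not the OECD researchers’ method.

```python
# Toy sketch: infer a person's preference weights from observed choices
# between pairs of outcomes (a Bradley-Terry-style stand-in, not CIRL itself).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0])                 # hidden preference weights
outcomes = rng.normal(size=(200, 2))           # outcomes described by 2 features

# Observed choices: the human picks outcome a over b with probability
# sigmoid(utility(a) - utility(b)).
pairs = rng.integers(0, len(outcomes), size=(500, 2))
util_diff = (outcomes[pairs[:, 0]] - outcomes[pairs[:, 1]]) @ true_w
chose_first = rng.random(500) < 1 / (1 + np.exp(-util_diff))

def neg_log_likelihood(w):
    diff = (outcomes[pairs[:, 0]] - outcomes[pairs[:, 1]]) @ w
    p = np.clip(1 / (1 + np.exp(-diff)), 1e-9, 1 - 1e-9)
    return -np.mean(np.where(chose_first, np.log(p), np.log(1 - p)))

w_hat = minimize(neg_log_likelihood, x0=np.zeros(2)).x
print("true weights:", true_w, "inferred:", np.round(w_hat, 2))
```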

AI systems need to be held accountable, says Alexandra Chouldechova, an assistant professor of statistics and public policy at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy.

“Systems fail to achieve their purported goals all the time,” Chouldechova notes. “The questions are: Why? Can it be fixed? Could it have been prevented in the first place?

“By being clear about a system’s intended purpose at the outset, transparent about its development and deployment, and proactive in anticipating its impact, we can hopefully reach a place where there will be fewer adverse unintended consequences.”

For the foreseeable future, Hendler believes humans and computers working together will outperform either one separately. For the partnership to work, a human must be able to understand the decision-making of the AI system, he says.

“We currently teach people to take the data and feed it into AI systems to get an ‘unbiased answer.’ That unbiased answer is used to make predictions and help people find services,” Hendler says. “The problem is, the data coming in has been chosen in various ways, and we don’t educate computer or data scientists how to know the data in your database will model the real world.”

This is certainly not a new problem. Hendler recalls the famous case of Stanislav Petrov, a Soviet lieutenant colonel whose job was to monitor his country’s satellite early-warning system. In 1983, the computers sounded an alarm indicating the U.S. had launched nuclear missiles. Instead of passing the warning up the chain, Petrov felt something was wrong and dismissed it as a false alarm; it turned out to be a computer malfunction. AI scientists, says Hendler, should learn from Petrov.

“The real danger is people over-trusting these ‘unbiased’ AI systems,” he says. “What I’m afraid of is most people don’t understand these issues … and just will trust the system the way they trust other computer systems. If they don’t know these systems have these limitations, they won’t be looking for the alternatives that humans are good at.”

Further Reading

Madras, D., Creager, E., Pitassi, T., and Zemel, R.
Learning Adversarially Fair and Transferable Representations, 17 Feb. 2018, Cornell University Library, https://arxiv.org/abs/1802.06309

Buolamwini, J. and Gebru, T.
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research, 2018, Conference on Fairness, Accountability and Transparency. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

Dovey Fishman, T., Eggers, W.D., and Kishnani, P.
AI-augmented human services: Using cognitive technologies to transform program delivery, Deloitte Insights, 2017, https://www2.deloitte.com/insights/us/en/industry/public-sector/artificial-intelligence-technologies-human-services-programs.html

Zhao, J., Wang, T., Yatskar, M., Ordonez, V., and Chang, K.-W.
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, Sept. 7–11, 2017, pages 2979–2989. https://pdfs.semanticscholar.org/566f/34fd344607693e490a636cdbf3b92f74f976.pdf

Tan, S., Caruana, R., Hooker, G., and Lou, Y.
Auditing Black-Box Models Using Transparent Model Distillation With Side Information, 17 Oct. 2017, Cornell University Library, https://arxiv.org/abs/1710.06169

O’Neil, C.
Weapons of Math Destruction, Crown Random House, 2016.

Hardt, M., Price, E., and Srebro, N.
Equality of Opportunity in Supervised Learning, Oct. 11, 2016, Cornell University Library, https://arxiv.org/pdf/1610.02413.pdf
