Opinion
Artificial Intelligence and Machine Learning

Empower Diversity in AI Development

Diversity practices that keep social biases from creeping into your AI.

We suggest that social biases are exacerbated by the lack of diversity in the artificial intelligence (AI) field.6 These biases cannot be effectively addressed by technical solutions that aim to mitigate biases stemming from data sources and data processing or from the algorithm itself.9 We argue that a social view—which has been neglected in AI development so far—is needed to address the root causes of some biases, given that AI systems are often reflections of our social structures. While great technical progress has been made in measuring and testing fairness4 and mitigating unfairness,1 biases may originate from any stage of AI development through the developers involved.6 As a result, some AI system biases reflect the social biases of the AI developers who build them. Hence, we argue that the lack of diversity in AI development is a source of social biases. As a solution, we present a set of practical recommendations that empower organizations to increase diversity in AI development. In an online supplement (https://osf.io/854ce/), we also present prior work on AI development biases and on bias-mitigating and bias-exacerbating practices.
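To make the notion of "measuring fairness" concrete, here is a minimal, illustrative sketch (not from the column or its cited work) of one widely used group-fairness measure, the demographic parity difference: the gap in a model's positive-prediction rate across demographic groups. The function name and data are hypothetical.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rate between groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels, one per prediction
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        n, pos = rates.get(group, (0, 0))
        rates[group] = (n + 1, pos + pred)
    shares = [pos / n for n, pos in rates.values()]
    return max(shares) - min(shares)

# A model that approves 75% of group "a" but only 25% of group "b":
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 indicates equal positive-prediction rates; larger values flag a disparity worth investigating. Such metrics illustrate the technical progress the column acknowledges, while its argument is that metrics alone cannot catch biases introduced by the developers themselves.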

Lacking Diversity in AI Development: A Source of Social Biases

We argue that a lack of diversity in AI development contributes to AI system biases in which individuals’ cognitively and affectively induced biases creep into the AI system. We call these social biases (see the accompanying sidebar for more information). AI developers with similar demographic backgrounds make similar (mis-)judgments and hence run the risk of codifying their social biases into an AI system that reinforces them. In contrast, we know that diversity is associated with positive outcomes.5 For example, cross-cultural diversity and gender diversity improve requirement specification, project performance, and innovation, and they reduce biases. Without a diverse team, AI development may focus only on certain design considerations and performance measures based on narrow value judgments, without considering the shared values of the broader community and diverse stakeholders.3

However, benefiting from diversity is challenging because it requires the right mix of participants (for example, in hiring) and involves creating policies and procedures that help take advantage of diversity. Engaging only in shallow actions without making any meaningful changes, so-called diversity washing, will not address the fundamental problem and may even be counterproductive. Rather, empowered diversity goes beyond superficial or tokenistic efforts and encompasses a deep commitment to engaging AI developers from diverse backgrounds.

Empowering Diversity in AI Development

Empowering diversity benefits all levels of an organization, that is, it positively affects developers, teams, and the organization. However, empowering diversity in practice can be challenging in AI development, particularly in science, technology, engineering, and mathematics (STEM) fields, given marginalized groups’ limited access to education and opportunities for mobility, for example, the long-lasting shortage of women graduates. Hence, we provide five practical recommendations that help organizations increase and empower diversity in AI development.

Cultivating diversity skills:  At the individual level, managers need to equip AI developers with a strong understanding of various social biases and their impacts. AI developers first need to acquire diversity as a skill before they can change their behavior. Take confirmation bias, for instance: when AI developers work in a male-dominated environment, they may take this as a given and focus on confirming evidence.

Managers can cultivate diversity skills by training developers to recognize and avoid this cognitive trap, ensuring they do not neglect the different experiences of others. For example, AI developers can use specific methods such as GenderMag to identify potential biases related to gender in AI systems.2 GenderMag encompasses several practices including evaluating software features for potential gender biases, creating diverse user personas to understand how different genders interact with the software, uncovering biases in task flows and interactions by cognitive walkthroughs, collecting data on user demographics for inclusive design decisions, and testing with diverse groups. Using these tools helps AI developers to search for trade-off solutions that satisfy competing goals.7

In addition, managers need to promote interactions between different groups within the organization by ensuring everyone has equal status, sharing common goals, fostering cooperation, and providing institutional and social support.8 These intergroup contacts create a positive environment for learning from each other’s experiences, which in turn develops diversity skills among the team members.

Mirroring target stakeholders’ compositions:  At the team level, the composition of the AI development team should mirror that of the system’s affected stakeholders to mitigate social biases. Take bounded awareness, for instance: when a team lacks diversity in skin color, it may overlook the effects of facial recognition AI on different skin tones.

Managers can adjust HR practices to mirror the composition of target stakeholders. For instance, in diverse hiring, implementing blind hiring techniques helps counter bias influenced by bounded awareness. By combining diversity reporting with well-crafted diversity performance indicators, managers can measure the effectiveness of updated HR practices and demonstrate progress. This approach also raises awareness of potential implicit biases that are harder to address.
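One simple, well-established diversity performance indicator that such reporting could track is Blau's index of heterogeneity, 1 − Σ pᵢ², where pᵢ is the share of category i. The sketch below is illustrative only (the function name and team data are hypothetical, not from the column); 0 means a fully homogeneous group, and values approach 1 as the mix becomes more even.

```python
from collections import Counter

def blau_index(categories):
    """Blau's index of heterogeneity: 1 - sum of squared category shares."""
    counts = Counter(categories)
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

# Hypothetical team composition along one dimension (gender):
team = ["woman", "man", "man", "man", "woman", "nonbinary"]
print(round(blau_index(team), 3))  # 0.611
```

Tracked per team and over time, such an index turns diversity reporting into a measurable trend rather than a one-off snapshot, though it captures only composition, not whether diverse members are actually empowered.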

Mirroring the target stakeholders’ composition not only improves team behavior by managing team abrasion and mitigating groupthink but also fosters innovation and creativity. Managers can use tools from psychology, for example, the Herrmann Brain Dominance Instrument (see https://www.thinkherrmann.com/hbdi), to identify complementary profiles and modes of thinking. Diversity of thought and viewpoint improves collective creativity and helps the team become more innovative.

Promoting inclusive knowledge sharing through experiences:  At the organizational level, managers should be aware of the positive and negative implications of lacking empowered diversity. Take availability bias, for instance: organizations should encourage the sharing of both success stories and failures, within and outside the organization, because doing so helps prevent narrow, one-sided views of the workforce.

When managers facilitate knowledge exchange within their teams, they promote organizational understanding through shared experiences. For example, acknowledging negative experiences, such as the gender discrimination reported by many women in AI and software development, can raise awareness about existing biases and their impact on individuals’ career progression. Managers can further this effort by launching awareness campaigns and providing internal workshops, for example, on emotional intelligence, to mitigate dominance and foster empathy within the organization.

Managers should also allocate at least equal resources to facilitate the sharing of positive experiences. Identifying and cultivating success stories for a diverse audience showcases possibilities that would otherwise go unrecognized. Managers should promote marginalized groups within the organization, because internal success stories are especially impactful. Speakers who serve as role models for meaningful change can offer insights into organizational processes and practices from marginalized perspectives. When internal success stories are limited, managers can also engage external speakers to share their perspectives and experiences.

Fostering long-term sustainability in a diverse talent pipeline:  One important challenge for organizations that want to empower diversity is the availability of talent within their environment. Therefore, managers must develop a sustainable talent pipeline that speaks to their diversity needs. Consider, for example, the limited availability of female candidates, a long-standing concern shared across STEM fields: developing suitable candidates requires sincere collective effort with a long-term goal in mind.

Organizations should engage with their environment when developing a diverse workforce. Given the severity and longevity of the problem, organizations need to explore new ways of encouraging underrepresented groups to take on roles in AI development. The fact that established means, such as targeted online job advertisements, are already biased only exacerbates the problem. While talent pools are limited and sometimes hard to access, organizations are encouraged to go where diversity is, that is, institutions of higher education. Organizations that collaborate closely with educational institutions serving a diverse population find it easier to develop a sustainable pipeline of diverse talent. For example, organizations often inform and sometimes co-develop educational curricula through their business needs as part of employer panels (for example, https://bit.ly/400eeXu). Communicating diversity as an important business need fuses diversity into the curricula development process.

In addition, organizations can engage in events that foster the growth and advancement of women in technology. For example, the Grace Hopper Celebration of Women in Computing focuses on empowering women to learn new skills, make connections, discuss innovative trends, and access motivational leaders.

While engagements with the environment can take some time to develop and evolve, organizations also need to harness and build on what they already have, for example, by offering opportunities for career advancement to retain diverse talent. Thus, organizations need to develop clear career paths for progression and promotion from within. Developing a positive culture for often underrepresented social groups positions the organization as an attractive employer. This signals that the organization does not see diversity as a goal in itself, but rather as a tool for accomplishing and delivering organizational objectives.

Establishing a diversity charter for AI development:  Finally, managers should develop an active agenda for proactive change. Executives can lead the charge by embracing existing government regulations, such as the U.S. Algorithmic Accountability Act of 2022 and the EU’s General Data Protection Regulation. The latter, for example, provides organizational stakeholders with a right to explanation and thus the ability to assess potential biases. Rather than reacting to these trends, regulations, and laws, managers should proactively embrace diversity and the changes that come with it. For example, managers can develop a diversity charter for AI development and establish task forces composed of employees from various demographic, ethnic, and other backgrounds, with representatives from different levels within the organization. The task forces and the charter help facilitate ongoing dialogue and action plans to continuously identify and address diversity challenges, while ensuring compliance with relevant regulations.

A proactive leadership mindset allows organizations to embrace diversity by relying on each employee’s unique strengths and skills to contribute to the organization’s overarching goals. Organizations that embrace this mindset and develop the corresponding implementation agenda are better positioned to develop responsible AI systems. For example, AI development organizations can embed diversity and inclusion principles into their AI development life cycle, ensuring that diverse perspectives are represented in the design, development, and deployment of AI systems to mitigate potential biases and promote fairness.

 

About the Authors

Karl Werder, Department of Business IT, IT University of Copenhagen, Copenhagen, Denmark.

Lan Cao, Information Technology & Decision Sciences, Old Dominion University, Norfolk, VA, USA.

Balasubramaniam Ramesh, Computer Information Systems, Georgia State University, Atlanta, GA, USA.

Eun Hee Park, Information Technology & Decision Sciences, Old Dominion University, Norfolk, VA, USA.

Balasubramaniam Ramesh’s research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-23-2-0224. The views and conclusions contained in this Opinion column are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. government. The U.S. government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation herein.

    References

    • 1. Cruz, A.F. et al. Promoting fairness through hyperparameter optimization. In Proceedings of the 2021 IEEE Intern. Conf. on Data Mining (ICDM) (Dec. 2021); https://bit.ly/4f01RPs
    • 2. Guizani, M. et al. Gender inclusivity as a quality requirement: Practices and pitfalls. IEEE Software 37, 6 (Nov. 2020); https://bit.ly/3BE0cAA
    • 3. Hao, K. This is how AI bias really happens—and why it’s so hard to fix. MIT Technology Rev. (2019); https://bit.ly/3Yia3F0
    • 4. Lalor, J.P. et al. Should fairness be a metric or a model? A model-based framework for assessing bias in machine learning pipelines. ACM Trans. Inf. Syst. (Mar. 2023); https://bit.ly/3BCAtIV
    • 5. Nilsson, F. Building a diverse company culture means empowering employees. Forbes (Sept. 22, 2021); https://bit.ly/3U26lNg
    • 6. Nouri, S. Diversity and inclusion in AI. Forbes (2019); https://bit.ly/405ght0
    • 7. Treude, C. and Hata, H. She elicits requirements and he tests: Software engineering gender bias in large language models. In Proceedings of the 2023 IEEE/ACM 20th Intern. Conf. on Mining Software Repositories (Mar. 17, 2023); https://bit.ly/3XXvl9v
    • 8. Wang, Y. and Zhang, M. Reducing implicit gender biases in software development: Does intergroup contact theory work? In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conf. and Symp. on the Foundations of Software Engineering (ESEC/FSE 2020) (Nov. 2020); https://bit.ly/3Ye4gAh
    • 9. Werder, K., Ramesh, B., and Zhang, R. Establishing data provenance for responsible artificial intelligence systems. ACM Trans. Manag. Inf. Syst. 13, 2 (June 2022); https://bit.ly/488zn3K
