BLOG@CACM
Artificial Intelligence and Machine Learning

Ethics and Equity in AI for Collaborative Learning


Artificial Intelligence has made headlines in education this year, but mostly in a limited way: as a tool for individual use by students or teachers. Powerful teaching and learning, however, is not just individual; it’s social. We’ve long understood that technologies can make it easier for students to collaborate well. As we work towards AI support for collaborative learning, we’re finding that issues of ethics and equity quickly come to the fore.

Let’s start by illustrating the tensions. In collaboration, students should make their thinking visible by sharing their ideas with others. A simple proxy measure for equity in remote collaborative learning could be the amount of talk time by each student. Another proxy measure (often used in brainstorming) is the number of distinct ideas generated. Existing speech- and language-processing techniques could compute these metrics. But a large body of computer-supported collaborative learning research tells us that such metrics oversimplify social learning. More appropriate metrics emphasize how each student experiences their degree of belonging to the group and how students build upon each other’s ideas, e.g., transactivity (Joshi & Rosé, 2007). We’ve found that belonging and knowledge building are equity- and ethics-laden.
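To make the tension concrete, here is a minimal sketch of those proxy metrics in Python. The utterance records, student names, and idea labels are invented for illustration, and neither institute’s system works this way; the point is only that talk-time share and idea counts take a few lines to compute, while belonging and transactivity do not.

    from collections import defaultdict

    # Hypothetical utterance log: (student, seconds_of_talk, idea_label).
    # In practice the idea labels would come from an upstream tagging step;
    # here they are hand-written for illustration.
    utterances = [
        ("ana", 12.0, "plate tectonics"),
        ("ben", 4.5, "plate tectonics"),
        ("ana", 20.0, "snow accumulation"),
        ("cam", 2.0, "measurement error"),
    ]

    talk_time = defaultdict(float)   # seconds of talk per student
    ideas = defaultdict(set)         # distinct ideas voiced per student

    for student, seconds, idea in utterances:
        talk_time[student] += seconds
        ideas[student].add(idea)

    total = sum(talk_time.values())
    for student, seconds in talk_time.items():
        print(f"{student}: {seconds / total:.0%} of talk time, "
              f"{len(ideas[student])} distinct ideas")

The ease of this computation is exactly the trap: such metrics are attractive because they are cheap, not because they capture the quality of the collaboration.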

We’ve been reflecting on these issues in two NSF-funded AI Institutes. Michael is a postdoc with the NSF Institute for Student-AI Teaming (iSAT) and is involved in building AI that supports small-group collaboration in K-12 classrooms. Jeremy is a co-PI with the NSF Engage AI Institute, which is building AI where students collaborate in storyline-driven learning experiences, e.g., solving a murder mystery by interviewing characters. We’ve both recently organized internal workshops (with outside experts including Marcelo Worsley and Karlyn Adams-Wiggins) around equitable and ethical collaboration.

Here are three things we agree on.

Design AI features using realistic examples of students’ learning. Rather than starting from what AI could do (which can lead to oversimplifying where students and teachers need help), both institutes are starting from detailed accounts and scenarios of collaborative learning. In Michael’s workshop, researchers jointly watched a video of a group of students puzzling out why Mount Everest moves every year. The researchers noticed that good ideas are sometimes left on the floor: although the ideas were generated, the student group did not pay attention to, or in some cases dismissed outright, some fruitful contributions. The reasons include equity considerations (e.g., race, gender) that influence which students are listened to more than others. In the workshop that Jeremy’s team hosted, teachers offered powerful examples of how AI technologies can give more "voice" to students with disabilities by offering them more inclusive ways to listen, speak, and be heard in a collaborative learning situation. Naive designs for AI in collaborative learning might assume all students are like an imagined "typical" student, but in realistic collaborative learning situations, learner variability is always present and important to inclusiveness.

Design for both AI automated support and teacher awareness. Both institutes foresee collaborative learning environments in which AI supports automatic adaptivity. In Michael’s workshop, researchers articulated ways in which AI could notice important yet neglected ideas and help students include those ideas as they build knowledge. At the same time, researchers pondered whether overlooking an idea might indicate a micro-aggression (Adams-Wiggins, 2020), and what to do if equity issues were blocking high-quality collaborative learning. In Jeremy’s workshop, teachers recognized that they can’t be everywhere as students collaborate in small groups, and yet they insisted that teachers remain "in the loop" if collaborative learning stalls in a particular group. Both institutes believe that a teacher should be aware of what the AI is doing (e.g., inspectable, explainable interventions) and should have the power to override an automated response by the AI. On the other hand, both institutes recognize that giving teachers full transparency into the AI’s actions might compromise young people’s expectations of privacy in small-group collaborations. Both institutes find that equity and ethics come to the fore as they contemplate the tension between automation and awareness.
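Neither institute has settled on a concrete mechanism, but one way to picture an inspectable, explainable, teacher-overridable intervention is as a simple record, sketched below in Python; the class, fields, and example values are all hypothetical. In this sketch the evidence field points to transcript spans rather than raw audio, one small way a design could balance teacher awareness against students’ privacy.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Intervention:
        """Hypothetical record of one automated action, kept inspectable for the teacher."""
        group_id: str
        action: str        # e.g., "prompt group to revisit an overlooked idea"
        rationale: str     # plain-language explanation shown to the teacher
        evidence: list[str] = field(default_factory=list)  # transcript spans, not raw audio
        created_at: datetime = field(default_factory=datetime.now)
        overridden: bool = False

        def override(self, reason: str) -> None:
            # Teacher veto: suppress the automated action and record why.
            self.overridden = True
            self.rationale += f" [overridden by teacher: {reason}]"

    iv = Intervention(
        group_id="group-3",
        action="prompt group to revisit an overlooked idea",
        rationale="'snow accumulation' was proposed but received no uptake for 3 minutes",
        evidence=["transcript lines 41-44"],
    )
    iv.override("the group is already circling back to this idea")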

Design for developing identity, not just knowledge. Both institutes acknowledge that one purpose of learning is to develop knowledge. Yet with respect to social learning, both also focus on how collaborative interactions help students understand their identities as STEM learners: how they belong, participate, and can grow toward long-term roles in STEM. Research tells us that social learning can support not only changes in knowledge, but also changes in a student’s perception of their place in a STEM field (Lave & Wenger, 1991). Identities develop more slowly than knowledge, and yet an accumulation of small mistakes in the design of an AI assistant for collaborative learning could lead students to conclude that they don’t belong in the field. Both institutes therefore argue that ethics and equity require longer-term research about how students use AI in social settings, not just quick A/B experiments. At the end of the day, AI is just one tool that can observe and react to a very limited set of events; realizing equitable collaborations in schools requires us to interrogate and contest the infrastructures, practices, and beliefs that led to those inequities in the first place.

Although our work is located within different Institutes, we both look forward to engaging with computer scientists and learning scientists broadly about how to design a future in which social learning is supported by AI and how to ensure that future is equitable and ethical. 

References

Adams-Wiggins, K. R. (2020). Whose meanings belong?: Marginality and the role of microexclusions in middle school inquiry science. Learning, Culture and Social Interaction, 24, 100353.

Joshi, M., & Rosé, C. P. (2007). Using transactivity in conversation for summarization of educational dialogue. In Workshop on Speech and Language Technology in Education.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press. https://doi.org/10.1017/CBO9780511815355

 

Michael Alan Chang is a postdoctoral researcher at the University of California, Berkeley and the NSF National Institute for Student-AI Teaming. Jeremy Roschelle is Executive Director of Learning Sciences Research at Digital Promise and a Fellow of the International Society of the Learning Sciences.
