BLOG@CACM
Artificial Intelligence and Machine Learning

AI in the Public Interest: Education and Democracy


"Democracy has to be reborn every generation, and education is its midwife"—John Dewey

Education, the most important public good in a democratic society, is increasingly seen as a private good—something that individual parents or students purchase for their own benefit. Artificial Intelligence (AI) has dramatic potential to accelerate this trend, especially under a policy regime that prioritizes the pace of innovation and the competitiveness of the market above all else. In particular, public educational services—like supporting students as they take a course, guiding them through college course selection, or tutoring them as they progress—may increasingly be procured by public educational institutions from private, for-profit AI companies. Public institutions may thus be further hollowed out, increasingly serving only to pass public dollars through to private providers.

I’ve been reading the literature on AI ethics, with its wide variety of concepts such as AI for social good, responsible AI, human-centered AI, and more. There is much strength in this literature as we seek AI for the public good, but I am deeply worried about governance. The assumption that AI companies will self-organize to honor ethical concepts seems increasingly far-fetched. For example, in a recent post on LinkedIn, Shuki Cohen shared two surprising changes at OpenAI. OpenAI’s latest research paper has no authors, making public debate harder. And OpenAI is no longer open; key information about datasets and architecture is now hidden to protect competitive interests. I can’t entirely fault OpenAI; what’s the chance of self-regulation in a market driven by speed and competitive pressure?

Thus, I believe we need to start talking about governance. "AI in the Public Interest" is a phrase that can build on prior ethical work but focuses on how to organize a public body to protect and advance the broader interests of society. It shifts the focus from the concepts we need to the governing structures we need. Education is a great place to start moving toward AI in the Public Interest.

At the SxSWEdu conference, one of the most fascinating sessions was led by Native American scholars who are pursuing AI as a way to preserve their cultures and languages. By creating specific language models and augmented reality applications, they are pursuing culturally sustaining learning technologies. An "Indigenous Protocol (IP)" guides their work, as described by Lewis and colleagues in a 2020 workshop report. The SxSWEdu session emphasized data governance, including plans to structure the governance of language data so that tribes do not lose sovereignty over their own languages.

The emphasis on data governance is also a key argument in "Governing artificial intelligence in the public interest," a paper by scholars at Stanford and University College London (UCL). Public institutions collect and house massive amounts of high-quality educational data, and that data will be needed to build sophisticated applications that serve education. AI in the Public Interest could develop policies so that publicly funded data sets do not become fodder for free-for-all commercial exploitation, and so that uses of these educational data are strongly steered toward achieving the public good.

I am also reminded of the London Grid for Learning, an Internet Service Provider that operates in the public interest; it is governed as a consortium of 33 local educational authorities. Because of this public governance, the London Grid has strong opportunities both to protect student privacy and to provide high-quality educational content for its constituent school members. The Stanford/UCL paper also points to the power of public procurement to shape markets, and the London Grid is a great example: individual school authorities have far more clout negotiating as a consortium for AI applications that advance the public good than they would have alone.

Yes indeed, democracy and education need to be reborn in an age of AI—and now is a critical time to vigorously pursue public governance of educational services and to avoid a slippery slide into a "private goods" view of how AI should be made available to students, teachers, and educational institutions.  

 

References

Dewey, J. (1980). The Middle Works, 1899-1924. SIU Press, p. 139.

Lewis, J. E. (Ed.). (2020). Indigenous Protocol and Artificial Intelligence Position Paper. Honolulu, Hawaiʻi: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR). https://spectrum.library.concordia.ca/986506

Mazzucato, M., Schaake, M., Krier, S., and Entsminger, J. (2022). Governing artificial intelligence in the public interest. UCL Institute for Innovation and Public Purpose, Working Paper Series (IIPP WP 2022-12). https://www.ucl.ac.uk/bartlett/public-purpose/wp2022-12

 

Jeremy Roschelle is Executive Director of Learning Sciences Research at Digital Promise and a Fellow of the International Society of the Learning Sciences.
