
Communications of the ACM

ACM Careers

AI Tool Guides Users Away from Incendiary Language


Users were responsive to the additional risk awareness provided by the algorithmic tool.

Credit: Getty Images

Cornell University researchers have developed an artificial intelligence tool that can track online conversations in real time, detect when tensions are escalating, and nudge users away from incendiary language.

The research shows promising signs that conversational forecasting methods within the field of natural language processing could prove useful in helping both moderators and users proactively lessen vitriol and maintain healthy, productive debate forums.

The work is detailed in two papers, "Thread With Caution," and "Proactive Moderation of Online Discussions," presented virtually at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW).

The first study suggests that AI-powered feedback can be effective in raising awareness of existing tension in a conversation and in guiding users toward language that elevates constructive debate, the researchers say.

From Cornell Chronicle
View Full Article

