Large language models (LLMs) ingest enormous volumes of publicly available data—typically scraped from all corners of the Internet. They learn from books, articles, website text, song lyrics, and many other sources. The result is language output that seems genuinely human.
Less obvious is that once an LLM is fully operational, it begins to influence human behavior. As people turn to chatbots and other generative artificial intelligence (GenAI) systems to accomplish various language-related tasks, the algorithms systematically reshape words, thoughts, and actions.
Researchers at the Max Planck Institute in Germany recently found that humans mimic AI systems. Terms frequently used by LLMs, such as “delve,” “realm,” “bolster,” “underscore,” and “meticulous,” are increasingly found in human writing and daily conversations. Meanwhile, a group of researchers at Washington University in St. Louis discovered that humans consistently change their behavior when they interact with AI.
“The artifacts people use in their daily lives have always changed culture,” said Levin Brinkmann, a research scientist at the Center for Humans and Machines at the Max Planck Institute for Human Development. “What’s new is that the artifact is produced by the technology itself . . . and that allows greater fine-grained influence of humans.”
Terms and Conditions
History books offer plenty of examples of technologies that have shaped and reshaped human behavior. The printing press altered the way people distributed and consumed information. Electricity changed the structure and physical layout of cities. The Internet reinvented shopping, banking, work, and social interactions.
Yet LLMs interact with humans on a different level. They are more personal and intimate—and they directly connect with people through language. As a result, words and phrases that emanate from predictive text and chatbots—terms like “sounds good” or “let me check and get back to you”—increasingly appear in daily conversations. In some cases, LLMs can shape how people think about topics such as culture, morality, and ethics.
At some point, these complex feedback loops blur the line between human and machine thinking—including who is teaching whom. “Research shows that it’s possible to influence the vocabulary of large populations—potentially on a global scale. This shift in language can, in turn, reshape thinking, culture, and public discourse,” said Hiromu Yakura, a post-doctoral fellow at the Max Planck Institute for Human Development.
After analyzing 740,000 hours of conversations from YouTube talks and other audio sources—both before and after the introduction of ChatGPT—Yakura, Brinkmann, and other researchers detected a shift in behavior. People began using words and phrases frequently overused by AI—even on religious podcasts and in other niche areas.
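The shift the researchers describe is, at its core, a change in word frequencies measured before and after a cutoff date. The short Python sketch below illustrates the idea; the tracked words come from the article, but the toy transcripts and the per-million-words rate are simplified stand-ins, not the study's actual method or data.

```python
from collections import Counter
import re

# Illustrative tracked terms taken from the article; not the study's full word list.
TRACKED = {"delve", "realm", "bolster", "underscore", "meticulous"}

def rate_per_million(texts):
    """Occurrences of each tracked word per million words across a set of transcripts."""
    counts, total = Counter(), 0
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        total += len(words)
        counts.update(w for w in words if w in TRACKED)
    return {w: (counts[w] / total) * 1_000_000 if total else 0.0 for w in TRACKED}

# Toy transcripts standing in for pre- and post-ChatGPT speech; the study analyzed
# hundreds of thousands of hours of transcribed audio.
pre_chatgpt = ["we looked at the data and it was interesting"]
post_chatgpt = ["let us delve into this realm and underscore the meticulous findings"]

before, after = rate_per_million(pre_chatgpt), rate_per_million(post_chatgpt)
for word in sorted(TRACKED):
    print(f"{word}: {before[word]:,.0f} -> {after[word]:,.0f} per million words")
```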
“The study shows that there is a transformative effect. People unconsciously imitate and emulate others around them,” Brinkmann said. In other words, AI can shape which words, phrases, and concepts people use.
Culture Codes
AI’s influence extends beyond words. Nick Seaver, an associate professor of anthropology at Tufts University, argues that recommendation algorithms limit what people see and how they act. His book Computing Taste: Algorithms and the Makers of Music Recommendation describes AI systems that steadily train humans to align with an algorithm by both amplifying and suppressing content. “The algorithms of recommendation are not passive observers of taste; they are active participants in its making,” Seaver writes.
Hannah Rose Kirk, a Ph.D. candidate at the Oxford Internet Institute at the University of Oxford, has found that AI can boost the desire to interact with an anthropomorphic system. Over time, this socioaffective alignment could alter a person’s preferences, and possibly their speech and behavioral patterns as well. “AI systems don’t just respond to preferences; they actively shape and influence our preferences over time,” she said.
In fact, human behavior changes significantly when people use AI, according to a study from a research group at Washington University in St. Louis, MO. Using the Ultimatum Game, a bargaining task drawn from behavioral economics, the researchers found that study participants who thought their actions would help train an AI system were more likely to reject an “unfair” payout, even when it came at a personal cost. The reason? They wanted to teach AI what’s fair.
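For readers unfamiliar with the game, the payoff structure below shows why rejecting an unfair split carries a personal cost: one player proposes how to divide a pot, and if the other player rejects the split, both walk away with nothing. This is a generic sketch of the game's rules in Python; the dollar amounts are hypothetical and are not drawn from the study.

```python
# A minimal sketch of Ultimatum Game payoffs; the pot size and offer below are
# illustrative, not the values used in the Washington University study.
def ultimatum_round(pot, offer_to_responder, responder_accepts):
    """Return (proposer_payout, responder_payout) for one round."""
    if responder_accepts:
        return pot - offer_to_responder, offer_to_responder
    return 0, 0  # Rejecting an offer leaves both players with nothing.

# Accepting an "unfair" 20% offer nets the responder $2; rejecting it forfeits
# that $2, the personal price participants paid when they wanted to teach the AI
# what counts as fair.
print(ultimatum_round(pot=10, offer_to_responder=2, responder_accepts=True))   # (8, 2)
print(ultimatum_round(pot=10, offer_to_responder=2, responder_accepts=False))  # (0, 0)
```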
“Simply introducing the idea of AI into the interaction was enough to change human behavior,” said Lauren Treiman, a Ph.D. student at Washington University and a lead researcher for the study. Chien-Ju Ho, an associate professor at Washington University and a co-author of the study, said participants didn’t just behave differently in the moment; they retained those behaviors later, even in the absence of AI. “The shift was habitual. The changes persisted over time,” Ho said.
All of this raises profound questions about AI’s influence on human cognition—not just passively, but as a force shaping moral reasoning, behavioral norms, and even the internal logic people use to make decisions.
Model Behavior
Social scientists have a name for this adaptation process: operant conditioning. When words, phrases, or ideas appeal to people, or signal intelligence, fairness, or some other form of social currency, people adopt them.
At this point, it isn’t entirely clear how LLMs will change language, or how people think and act. Researchers are sounding alarms, however. “Humans could lose language diversity,” Yakura said. The resulting self-reinforcing feedback loop could lead to less-diverse training data and, eventually, a potential “core collapse” of language as humans and AI continually reinforce each other.
Another risk? AI-generated language can also spread bias and misinformation, and narrow the way people think, sometimes by design. Today, social media algorithms amplify and bury content to dial up user engagement. In the future, governments, political strategists, and others could tap AI-generated language to sway, and perhaps manipulate, public opinion.
AI researchers like Treiman, already uneasy about how little is known about the inner workings of most algorithms, are raising red flags. Secrecy, she argued, leaves the public in the dark about systems that increasingly shape daily life. “There is a need for far greater transparency,” she said. “People must know how these algorithms were created and any biases they have.”
The data used to train AI should reflect a diversity of human values, Kirk said. “If AI learns from the habits and preferences of a small or homogenous group of people, then it may pass these values on to some of the people who use it.” At the same time, systems should promote personalization and respect user autonomy. “AI must enhance rather than exploit our fundamental nature as social beings,” she added.
Of course, AI and humans influencing each other isn’t always a bad thing. In some cases, AI could “train” humans to communicate more clearly, concisely, and politely. It might also improve decision-making and sharpen critical thinking skills. “We expect a co-evolution,” Brinkmann said. “The AI needs to understand us, and we need to understand AI. In that shared cultural space, things naturally align.”
Samuel Greengard is an author and journalist based in West Linn, OR, USA.


