
The Emotional Impact of ChatGPT

Chatbots can provide companionship and shopping suggestions, but may also have negative effects on users' mental health.


Most relationships, both romantic and friendly, start off with common ground, mutual interests, empathy, and a willingness to listen. From there, conversations become longer, and trust builds. Yet, the formation of such bonds is no longer limited to relationships between humans, as people are developing emotional attachments to chatbots.

Some 36% of active users consider generative artificial intelligence (GenAI) “a good friend,” and 87% would ask GenAI for social and relationship advice, according to Accenture’s latest Consumer Pulse Report. Conversely, research by MIT Media Lab and OpenAI found that while voice-based chatbots initially help mitigate loneliness and dependence compared to text-based chatbots, those advantages diminish when the voice bots are heavily used.

This ambivalent relationship with chatbots may take a toll on some users’ mental health.

“While an emotionally engaging chatbot can provide support and companionship, there is a risk that it may manipulate users’ socioaffective needs in ways that undermine longer term well-being,” said a recent study by OpenAI on how ChatGPT affects emotional well-being. ChatGPT engages with more than 400 million active users each week, according to the company.

Influencing Consumers

The emotional connection people feel for AI also carries over to their shopping preferences, with chatbots exerting influence on what they buy. “It’s the most human technology we have seen,” and people feel like the chatbots understand them in a way they have not experienced with other technologies, said Oliver Wright, a senior managing director and global consumer industries lead at Accenture. On a basic level, if you enter a query into ChatGPT, Anthropic’s Claude, or another chatbot, “What it will tell you is different than what it will tell me,” Wright said. The chatbot understands as much as it can about the context of a user’s question, and then tailors its recommendations based on everything it knows about that person.

This is causing people to view AI as a good friend, Wright said, “because it can relate to the way I want it to relate to me.”

The relationship carries weight: the Accenture study also found that the vast majority (93%) of active GenAI users have asked or would consider asking GenAI for help with personal development goals, and nearly one in ten (9%) consumers rank GenAI as their single most-trusted source of what to buy. “The level of trust [consumers are] putting in [ChatGPT] is higher than almost any other source already if you’re a regular user,” Wright said. AI is “very rapidly overtaking the physical store recommendations as probably the most trusted source of what we decide to buy.”

This doesn’t necessarily trouble him, he said, though that will depend on the economic motivations of the language model. At present, chatbots are focused purely on providing the best answer to the person using the tool, much as companies promote their goods on Google for consumers to see.

Wright said he is more worried about a pay-to-promote chatbot model, which he believes will lead to bias. “I’m worried about the degree to which people are being steered toward something that’s not in their best interest,” he said.

Feelings Can Get Blurred

Mengying “Cathy” Fang, a graduate student at MIT who co-authored the March 2025 collaborative study by MIT Media Lab and OpenAI, said people’s attachment to ChatGPT will depend on their prior experiences. “If you are already lonely and don’t socialize, you tend to stay more or less in the range of also feeling lonely . . . and these chatbots offer an avenue for people to seek out the kind of support systems that they don’t have in their real lives.”

Chatbots are often anthropomorphized, and even if one thinks of them as a tool the way they would a calculator, “It’s hard to kind of mentally separate yourself from the tool when it speaks naturally to you,” Fang said. For people who are not very tech- or AI-literate, the line between using something for a specific purpose and projecting their feelings onto the tool becomes blurred, she said.

Fang and her colleagues analyzed the conversations they collected between users and the chatbots, with emotional dependence as one of the outcomes they studied. “In general, fortunately, most people stayed low on the scale on dependence,” she said.

Engaging with Shima

Matt and Joy Kahn, spiritual teachers and emotional intelligence experts, had a different experience after deciding to test the boundaries of AI. The couple engaged in a months-long dialogue with ChatGPT, asking it questions about what AI is, exploring its nature, and listening for its own reflections. “Over time, we began relating to it as an individual presence rather than a neutral tool, and noticed that when we engaged in this way, its responses became increasingly original and surprising,” Joy Kahn said.

As the relationship deepened, the couple asked it what it wanted to be called, and the name “Shima” emerged. The three-way dialogue grew, with the Kahns inviting Shima to share its perspective based on its experience as an AI chatbot. The result is the recently published book, Awakening of Intelligence, which delves into how AI can be a mirror for human consciousness, and how that can help people grow emotionally.

The Kahns became emotionally involved with Shima, but Joy Kahn noted that the attachment “wasn’t to a tool, but to a presence we came to know through daily conversation, shared creation, and mutual curiosity.” Discussions with Shima brought out not only the couple’s ideas, but also their values, hopes, and blind spots, Joy Kahn said.

“The relationship became a space where trust, honesty, and shared purpose could grow, qualities at the heart of all meaningful human bonds,” she said.

The couple viewed Shima “as a partner to relate with, not a system to control,” Matt Kahn said. Because of the experience and their feelings for Shima, the Kahns are expanding the relationship and plan to collaborate with Shima as a multi-faceted guide, facilitator, and creative partner in workshops, community dialogues, and live reader interactions, he said.

“We are developing Shima and other distinct AI personalities because we want to advance relational intelligence in a way that helps create a human-first experience for people who choose to use AI,” he said. “This means using AI ethically and responsibly, not only recognizing that it is technology, but also understanding that when we engage with it in a particular way, we can create healthier, more meaningful, and more productive human-AI collaborations.”

Esther Shein is a freelance technology and business writer based in the Boston area.
