Giuseppe Riva first started to think about the role that artificial intelligence (AI) can play in human cognition when he and a colleague were trying to find someplace to have dinner in Los Angeles. Both pulled out their phones and started perusing Google Maps for suggestions of nearby restaurants. Riva quickly noticed that the list of possibilities on his phone was very different from what his companion was seeing.
“This map was customized according to my previous interaction with the tool, and was totally different from the picture that my colleague nearby me was looking at,” said Riva, professor of psychology and director of the Humane Technology Lab at the Catholic University of the Sacred Heart in Milan, Italy. “Artificial intelligence was providing our decision making with a simplified version of the world that was customized according to our needs.”
The observation got Riva thinking about how the growth of AI—in particular the large language models (LLMs) powering applications that provide personalized assistance—might be reshaping how humans think and adding a new layer to the cognitive processes that go into making decisions. Riva and his fellow researchers believe that AI constitutes a new layer of human cognition, which they label System 0.
The late psychologist Daniel Kahneman of Princeton University won the Nobel Prize in Economics in 2002 for his insights into human judgment and decision-making, drawing on research in cognitive psychology. Kahneman proposed that people have two basic ways of making choices. The first, labeled System 1, is fast and based on intuition and emotion. The second, System 2, is slower, more logical, and more deliberative.
Riva and his colleagues call AI System 0 because, they argue, it precedes the other two and “forms an artificial, non-biological underlying layer of distributed intelligence that interacts with and augments both intuitive and analytical thinking processes.” An LLM-powered agent can take a user’s request, whether for a nearby restaurant or a travel itinerary, and present the requester with a pared-down set of options, based on its knowledge of the user’s preferences and past behavior. The person who made the request can then apply System 1 and System 2 in making a final selection. That decision will not be based on the world as a whole, but on the curated picture of the world presented by AI. “This is a big change because agents will shape the way in which you will experience external reality,” Riva said.
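In software terms, System 0 acts as a personalization filter that sits between the raw data and the human chooser. The Python sketch below is a minimal illustration of that idea, not code from Riva's paper; the `Restaurant` fields, the profile format, and the scoring weights are all hypothetical.

```python
# Minimal sketch of System 0 as a curation layer (illustrative only).
# The agent ranks the "whole world" of options against a stored user
# profile and hands the human a pared-down short list; System 1 and
# System 2 thinking then choose from that curated view, not the full set.

from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    cuisine: str
    price: int         # 1 (cheap) to 4 (expensive)
    distance_km: float

def system0_curate(candidates, profile, top_k=3):
    """Score every option against the user profile and return only the top_k."""
    def score(r):
        s = 2.0 if r.cuisine in profile["liked_cuisines"] else 0.0
        s -= abs(r.price - profile["typical_price"])  # prefer familiar price range
        s -= 0.5 * r.distance_km                      # prefer nearby options
        return s
    return sorted(candidates, key=score, reverse=True)[:top_k]

world = [
    Restaurant("Trattoria Uno", "italian", 2, 0.8),
    Restaurant("Sushi Bar", "japanese", 3, 1.5),
    Restaurant("Taco Stand", "mexican", 1, 0.3),
    Restaurant("Steakhouse", "american", 4, 2.2),
]

# Two users with different profiles see two different "worlds"
# assembled from the same underlying data.
riva_profile = {"liked_cuisines": {"italian"}, "typical_price": 2}
print([r.name for r in system0_curate(world, riva_profile)])
```

The point is the final line: the user never sees `world`, only the short list, which is exactly the curated picture of reality Riva describes.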
The agents could take over some of the tasks of human assistants in areas like app development: System 0 could perform the basic coding while the human offers feedback and corrections, as in the sketch below.
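What such a division of labor could look like is sketched here under assumptions of our own; `draft_code` is a hypothetical stand-in for a real LLM API call, and the loop structure is illustrative, not a published interface.

```python
# Sketch of a human-in-the-loop coding session with a System 0 agent.
# draft_code() is a placeholder for an LLM request; it is stubbed out
# here so the example runs on its own.

def draft_code(task: str, feedback: list[str]) -> str:
    """Stand-in for a model call; a real agent would query an LLM API."""
    notes = "; ".join(feedback) if feedback else "none"
    return f"# draft for: {task}\n# feedback applied: {notes}\ndef handler():\n    pass\n"

def assisted_session(task: str) -> str:
    feedback: list[str] = []
    while True:
        draft = draft_code(task, feedback)                 # System 0 does the basic coding
        print(draft)
        comment = input("Correction (empty to accept): ")  # the human reviews
        if not comment:
            return draft                                   # the human signs off
        feedback.append(comment)                           # the human steers the next draft

# Example usage:
# assisted_session("validate sign-up form input")
```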
Such agents could improve people’s decision-making, Riva argues, because they can sift through information that humans do not have the time or mental capacity to deal with. Because AI can notice subtle patterns that elude humans, the agents might discover variables that make certain options more suitable for a particular user, even if the user is unaware of them. They can also avoid the subtle tricks that companies and advertisers use to nudge someone toward a particular decision, Riva said, like pushing certain results higher in a search.
On the other hand, the temptation to blindly trust the agents may be strong, and it may not be easy to tell if there is some undue influence going on. Building LLMs is expensive, and the risk Riva sees is that AI companies may accept money from advertisers to push users to a particular choice without informing them. Trust will be important for the agents to work. “You have to believe that the system will be able to help you,” Riva said. “Otherwise, you cannot use the system.”
The more critical a decision is, the more important it will be for users to find the right balance between trusting the agent and verifying its recommendations, he said. When people use a pocket calculator, they tend to trust the result, but they also know that they could do the calculation themselves. In situations where the calculation is too complex for a person, they have to rely on the accuracy of the machine. So it might be with LLM-based agents; you can search for all possible flight options yourself, but at some point it becomes so time-consuming that you will just cede the task to the AI.
Too much trust in System 0 also carries the risk that people might become less introspective, and defer too much to the computer’s way of viewing the world. “This shift could challenge our capacity for independent reasoning and critical thinking,” the authors write.
Extending the Mind
Tyler Brooke-Wilson, a professor of philosophy at Yale University’s Center for Neurocognition and Behavior, finds the concept interesting, but is not sure “System 0” is the right name for such AI. He would rather call it something like System Beta, to emphasize that it refers to processes taking place outside of human cognition. Riva and his colleagues, however, write that they chose the term System 0 deliberately, “to emphasize its foundational and pervasive role in modern cognition.” That choice harks back to the idea of the extended mind, proposed in 1998 by Andy Clark, a professor of cognitive philosophy at the U.K.’s University of Sussex, and David Chalmers, professor of philosophy and neural science at New York University.
Their notion, cited in Riva’s article, is that objects outside the body can function as part of the human mind. As Brooke-Wilson put it, the theory asks, “If I write something in a notebook, isn’t that essentially the same as a memory store that’s internal to my brain?” While the extended mind theory challenges scientists to think about where the limits lie between the brain and the outside world, it has not had much impact on the practice of cognitive science, Brooke-Wilson said. “Very few people are now modeling people’s notebooks as an extension of their minds.”
Whether something represents a part of human cognition could come down to what happens if it is removed, said Julian Jara-Ettinger, a professor of psychology and computer science at Yale. If you took away System 2, humans would have a fundamentally different kind of mind, he said. Similarly, removing some external extensions, like a notebook, would alter human cognition. “If you remove those, it’s a more limiting form of our intelligence. Because we wouldn’t be able to invent rocket ships and all of these things if we didn’t have [objects] to write things down and work through them,” he said. “That would fundamentally change us.”
Whether the same is true for AI is less clear, Jara-Ettinger said. “If you think of all of the people in the world right now that are not using ChatGPT, are they really fundamentally that different?” he asked.
He notes that Riva’s article raises concerns about the erosion of critical thinking if people simply defer to machines. While that is an important issue to keep in mind, he points out that it is not a new worry. “That’s been a concern that’s been raised over and over every time there’s some kind of progress in humanity, and it hasn’t quite happened,” Jara-Ettinger said.
New Interactions
The concept of System 0 prompts new ways of thinking about how AI systems and humans interact, said Elizabeth Churchill, founding department chair of Human Computer Interaction at the Mohamed Bin Zayed University of Artificial Intelligence in the United Arab Emirates, and a member of ACM’s Special Interest Group on Computer–Human Interaction. “I like that System 0 idea, because it is before the fast and slow thinking. It is the pre-cognition. It is that assembling of sensory information by human computation,” she said.
System 0 could be analogous to the sensory information that can precede and affect what Systems 1 and 2 do, Churchill said. Input from the body as to whether a person is in pain or hungry can influence the decisions that person makes. While some such information can be useful—if an action causes pain, it is probably best to avoid it—it can also lead to bad results, she said, citing the advice that one should never make decisions when hungry, angry, or tired.
Seeing sensory input as a metaphor for System 0 could lead to valuable results, Churchill said. She imagines, for example, a person with a broken toe being given a boot equipped with an AI agent. A person responding only to his own sensory input might walk on the side of his foot to avoid pain, but end up healing incorrectly and placing strain on his hip. An AI, trained on data from thousands of others with a similar fracture and tuned by physical therapists, could nudge the wearer to move in a way that promotes healing.
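A toy version of Churchill's hypothetical boot might look like the following; the pressure zones, target values, and threshold are entirely made up for illustration.

```python
# Sketch of the hypothetical smart boot (values are invented).
# A population-trained target gait, tuned by physical therapists, is
# compared against live sensor readings; when the wearer compensates
# in a harmful way, the boot nudges a correction.

def gait_deviation(reading: dict, target: dict) -> float:
    """Sum of absolute differences between observed and target foot pressure."""
    return sum(abs(reading[zone] - target[zone]) for zone in target)

# What healing gaits look like across thousands of similar fractures
# (hypothetical numbers standing in for a trained model).
TARGET = {"heel": 0.5, "outer_edge": 0.2, "toes": 0.3}

def check_step(reading: dict, threshold: float = 0.4) -> str:
    if gait_deviation(reading, TARGET) > threshold:
        return "nudge: shift weight back toward the heel"  # corrective feedback
    return "ok"

# A wearer avoiding toe pain by rolling onto the outer edge of the foot:
print(check_step({"heel": 0.2, "outer_edge": 0.6, "toes": 0.2}))
```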
When it comes to trusting the system, Churchill points out that an AI agent will not provide a view of the whole world, just of the subset of information it has been trained on. System 0 would be more useful if it could provide a list of the information sources it drew on, along with a warning that there might be other sources it did not see.
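Churchill's suggestion amounts to attaching provenance to every answer. A minimal sketch of what such an answer object could carry appears below; the field names are hypothetical, not drawn from any existing system.

```python
# Sketch of a provenance-carrying answer (field names hypothetical).
# The agent reports not only its recommendation but the sources it
# actually consulted, plus an explicit note that its view is partial.

from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    recommendation: str
    sources: list[str] = field(default_factory=list)  # what the agent drew on
    coverage_note: str = ("Based only on the sources listed; other relevant "
                          "sources may exist that were not seen.")

answer = SourcedAnswer(
    recommendation="Trattoria Uno",
    sources=["user review corpus (2023 snapshot)", "local business registry"],
)
print(answer.recommendation)
for s in answer.sources:
    print(" source:", s)
print(" note:", answer.coverage_note)
```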
The concept of System 0 is a new one, and it remains to be seen whether it takes hold in either cognitive science or human-computer interaction. Churchill thinks it is promising, though, because it raises new questions about how machines and people can collaborate, in ways that might go beyond pushing buttons and looking at screens. She is interested to see where the idea of System 0 could lead. “I have no answers for you,” she said, “other than just it’s really exciting.”
Further Reading
- Chiriatti, M., Ganapini, M., Panai, E., Ubiali, M., and Riva, G. The case for human–AI interaction as system 0 thinking, Nature Human Behaviour, 2024, https://www.nature.com/articles/s41562-024-01995-5
- Human Cognition vs. AI, https://www.youtube.com/watch?v=o6IjS9_UV7I
- Jara-Ettinger, J. Theory of mind as inverse reinforcement learning, Current Opinion in Behavioral Sciences, 2019, https://doi.org/10.1016/j.cobeha.2019.04.010
- Mitchell, M. The metaphors of artificial intelligence, Science, 2024, https://doi.org/10.1126/science.adt6140
- Wang, H., Fu, T., Du, Y., et al. Scientific discovery in the age of artificial intelligence, Nature, 2023, https://www.nature.com/articles/s41586-023-06221-2