BLOG@CACM
Artificial Intelligence and Machine Learning

How Did Scientists Succumb to Aunt Edna? The Dangers of a Superintelligent AI Are Fiction

Posted by Northeastern University Senior Research Scientist Walid Saba
Talk of the existential threat of AI is science fiction, and bad science fiction at that, because it is not based on anything we know about science, about logic, or even about ourselves.


The Tale of Aunt Edna

When she first saw her daughter converse with a black device on her coffee table, asking it what the weather was like outside and then asking it to play some “soothing morning music,” Aunt Edna exclaimed in horror, “What? That black thing can talk and can do what you ask it to do? God help us! This is the end of the world!”[1].

Of course, it did not take Aunt Edna long to realize that the black device was just matching sounds with predefined actions, and that it was only a little more advanced than an automated phone menu – just as her husband realized, around the same time, that there are no fully autonomous vehicles, only vehicles with advanced cruise control. With that realization, all was well with Aunt Edna. Until, that is, she started hearing about Artificial Intelligence on the news. What worried her was that the narrative escalated to talk of the potential danger of intelligent machines, and of how a Superintelligent AI is an existential threat to humanity. This time around we could not comfort Aunt Edna, no matter how hard we tried. “Listen, people in government are worried and are questioning very smart people who work in this technology,” she said. I knew that this time we really had our work cut out for us. After all, it is all over the media, and cable news channels are interviewing the godfathers of this technology, who are themselves afraid of how dangerous it is for humanity. To be honest, I almost gave up; I simply ran out of rabbits to pull out of my hat. It would seem that policymakers, respectable news outlets, and even brilliant scientists have all succumbed to Aunt Edna. Instead of soothing her, these respectable sources have joined her. How could I reverse that?

So What Happened in AI?

Well, Aunt Edna is not being entirely irrational. After all, something nontrivial has happened in AI. With the release of OpenAI’s ChatGPT, especially the version powered by the GPT-4 large language model (LLM), it has become apparent that LLMs have crossed some threshold of scale beyond which there is an obvious qualitative improvement in their capabilities. These models can now generate human-like, coherent language; answer questions about virtually any subject; generate code and realistic images; and even produce video games and virtual worlds in response to a prompt given in plain natural language. It is without any doubt a monumental achievement – a scientific and engineering feat that Alan Turing and John McCarthy would marvel at. In fact, we believe linguists, psychologists, philosophers, and cognitive scientists must reflect deeply on this accomplishment, since the technology demonstrated by these LLMs clearly nullifies many previously held opinions that questioned the possibility of machine intelligence (many of them held by luminaries in cognitive science and the philosophy of mind). All of that is true. What these LLMs have demonstrated is that a bottom-up reverse engineering of language at scale can tell us a lot about how language works. But, in our opinion, that is where the good news ends.

Despite their apparent success, LLMs are not (really) ‘models of language’ but statistical models of the regularities found in linguistic communication. Models and theories should explain a phenomenon (e.g., F = ma), but LLMs are not explainable, because explainability requires structured semantics and reversible compositionality that these models do not admit (see Saba, 2023 for more details). In fact, due to the subsymbolic nature of LLMs, whatever ‘knowledge’ these models acquire about language will always be buried in billions of microfeatures (weights), none of which is meaningful on its own. In addition to the lack of explainability, LLMs will always generate biased and toxic language, since they are susceptible to the biases and toxicity in their training data (Bender et al., 2021). Moreover, due to their statistical nature, these systems can never be trusted to decide on the “truthfulness” of the content they generate (Borji, 2023) – LLMs ingest text, and they cannot decide which fragments of text are true and which are not. Note that none of these problems is a function of scale; they are paradigmatic issues that are a byproduct of the architecture of deep neural networks (DNNs) and their training procedures. Finally, and contrary to some misguided narratives, these LLMs do not have human-level understanding of language (for lack of space we do not discuss here the limitations of LLMs regarding their linguistic competence, but see this for some examples of problems related to intentionality and commonsense reasoning that these models will always struggle with). Our focus here is on the now-popular theme of how dangerous these systems are to humanity.
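
To make this concrete for Aunt Edna, here is a minimal, purely illustrative sketch (a toy vocabulary and made-up numbers, not any real LLM or API) of what a language model ultimately computes: a probability distribution over possible next tokens, given a context. Nothing in this computation asks, or could ask, whether a continuation is true.

```python
# A purely illustrative toy, not a real LLM: like an LLM, it reduces
# "language" to a probability distribution over possible next tokens.
import numpy as np

VOCAB = ["the", "sky", "is", "blue", "green", "."]

def next_token_probs(context):
    """Stand-in for billions of weights: map a context to a distribution.
    In a real LLM these scores come from opaque matrix multiplications;
    no individual weight is meaningful on its own."""
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    logits = rng.normal(size=len(VOCAB))
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

probs = next_token_probs(["the", "sky", "is"])
print(dict(zip(VOCAB, probs.round(3))))
# The model can only say which continuation is statistically *likely*
# ("blue" vs. "green"); it has no machinery for deciding which is *true*.
```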

What Can Potentially Be Dangerous?

To begin with, let us clarify our terminology. Clearly, the danger to humanity that some “AI authorities” are currently speaking of is not the danger or harm that can be caused by natural phenomena like hurricanes and earthquakes, nor is it the harm or damage that we (humans) can cause by misusing some entity, say a rock, a knife, or nuclear energy. The harm and potential danger to humanity that some are speaking of when it comes to AI is a function of “intelligence” – something that rocks, knives, hurricanes, and nuclear energy do not possess. In other words, it is the potential danger of a Superintelligent entity – an entity whose intelligence surpasses that of humans. But here is where the problem lies, Aunt Edna. Superintelligent entities must have mental states that represent knowledge, belief, and truth, and no entity we know of (as of now) can, like humans, contemplate the things it knows or believes, or distinguish truth from nontruth. We, on the other hand, not only have mental states, but we are conscious of them (we can reflect on them). We can be in a mental state where we contemplate what we believe, what we think we believe, what we know we believe, what we believe we think, or even what we know we know, and so on. We also have what logicians call BDI (beliefs, desires, and intentions). In short, we are entities with intelligence, and we are conscious (aware) of our intelligence, and that is why it is we (and not machines) who are potentially dangerous. None of that is true of current “AI” (AI is in quotes here for the usual reasons!). These statistical predictors do not have beliefs, desires, and intentions, and thus they do not have the mental states that are a prerequisite for deciding to do harm.
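
For readers who like to see things written down, the sketch below (hypothetical and minimal, not any actual agent framework) shows the bare shape of the BDI mental state we are talking about: a store of beliefs, desires, and intentions that the agent itself can inspect. A stack of weight matrices contains no such structures to reflect on.

```python
# A minimal, hypothetical sketch of a BDI-style mental state -- the kind of
# structure an entity would need before it could "decide" anything at all.
from dataclasses import dataclass, field

@dataclass
class MentalState:
    beliefs: set = field(default_factory=set)       # propositions taken to be true
    desires: set = field(default_factory=set)       # world-states the agent wants
    intentions: list = field(default_factory=list)  # desires it has committed to pursue

    def believes(self, proposition: str) -> bool:
        # The agent can inspect (reflect on) its own beliefs -- a capacity
        # that weighted additions and multiplications do not provide.
        return proposition in self.beliefs

me = MentalState(beliefs={"it is raining"}, desires={"stay dry"})
me.intentions.append("take an umbrella")
print(me.believes("it is raining"))   # True: a mental state one can query
```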

For an entity to be potentially dangerous (and not just something that can cause damage or harm), the entity must be able to decide on achieving a goal – say, the annihilation of humans. To achieve its goal G, it must devise a plan P. The plan would consist of subgoals, say g1, g2, …, gn. To know that it has achieved g1 (and is not stuck there), it has to know that whatever g1 entails is now true (how else could it know it is done with g1?). And that means it must be able to decide what is true and what is not – and these LLMs cannot decide what is true and what is not. Are you with me, Aunt Edna? Here is the overall argument (a small code sketch follows it):

Suppose an entity E can cause harm. Then:

  1. E has to devise a plan P to achieve a goal G (G = cause harm).
  2. P consists of subgoals g1, g2, …, gn.
  3. Before E moves to g2, E has to know that g1 is accomplished.
  4. To know that g1 is accomplished, E must know what is now true that was not true before.
  5. But E cannot decide what is true and what is not, so the supposition that E can cause harm cannot be true.
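
Here is that argument as a small code sketch (hypothetical names, not any real planning library): to advance past a subgoal, the agent must verify that what the subgoal entails is now true – the very check a purely statistical text predictor has no grounded way to perform.

```python
# Illustrative sketch only: the skeleton of goal-directed behavior.
# The callables passed in are hypothetical placeholders, not real APIs.
from typing import Callable, List

def pursue(subgoals: List[str],
           act: Callable[[str], None],
           is_now_true: Callable[[str], bool]) -> bool:
    """Work through subgoals in order, advancing only after verification."""
    for g in subgoals:
        act(g)                   # take some action toward subgoal g
        if not is_now_true(g):   # steps (3)-(4): decide what is now TRUE
            return False         # stuck: cannot know it is done with g
    return True                  # goal G is reached only if every check passed

# An LLM can generate fluent text *about* acting, but it supplies no grounded
# is_now_true(); without that truth check, the loop above cannot even get
# past its first subgoal, let alone carry out a plan to harm anyone.
```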

Even if the passive statistical large language models could consciously decide on a goal, and even if we agree that they could then devise a plan, an LLM must still be able to determine that it is moving forward in its plan, and that means it must be able to determine what is now (or what has become) true as a result of the actions it is taking. But a passive statistical mesh cannot decide what is true outside (it does not even know what is true inside, for that matter – the mental state of “I know” does not even apply, because all it computes is multiplication and weighted addition). So, Aunt Edna, there you go – we are light years away from a Superintelligent being that can cause our special intelligent kind any harm.

Do you see, Aunt Edna? Entities like LLMs can never be harmful. We might be gullible enough to trust everything they tell us, and that can cause us harm, but they, on their own, are harmless. So any talk of the existential threat of a Superintelligent AI is silly.

So enjoy the news about “the potential danger of AI.” But watch and read this news as if you were watching a really funny sitcom. Make a nice drink (or a nice cup of tea), listen, and smile. And then please, sleep well, because all is OK, no matter what some self-appointed godfathers say. They might know about LLMs, but they have apparently never heard of BDI.

Good night, Aunt Edna.


Walid Saba is Senior Research Scientist at the Institute for Experiential AI at Northeastern University. He has published over 45 articles on AI and NLP, including an award-winning paper at KI-2008.


[1] The name ‘Aunt Edna’ is borrowed from the late Jerry Fodor, who regularly used ‘Aunt Edna’ to refer to an average person (of average intelligence) who possesses the basic commonsense knowledge that all humans have.
