Artificial Intelligence and Machine Learning

Can AI Become Conscious?

"Sooner or later we will get machines that are at least as intelligent as humans are," says neuroscientist Christof Koch.

At the Allen Institute for Brain Science in Seattle, a large-scale effort is underway to understand how the 86 billion neurons in the human brain are connected. The aim is to produce a map of all the connections: the connectome. Scientists at the Institute are now reconstructing one cubic millimeter of a mouse brain, the most complex piece of brain tissue ever reconstructed. Mapping exactly how the brain is wired will help us to understand how healthy brains function, and what goes wrong in diseased brains.

The Institute's chief scientist and president is neuroscientist Christof Koch. Together with Francis Crick, co-discoverer of the structure of DNA, Koch pioneered the neurobiological study of consciousness. With neuroscientist and psychiatrist Giulio Tononi, Koch co-developed the Integrated Information Theory of consciousness, grounded in the mathematics of systems theory. In 2019, Koch published the book The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed.

If there is one scientist in the world who can shed light on the intriguing question of whether or not machines can become conscious, it is Koch.

What is the essence of the integrated information theory?

The theory fundamentally says that any physical system that has causal power onto itself is conscious. What do I mean by causal power? The firing of neurons in the brain that causes other neurons to fire a bit later is one example, but you can also think of a network of transistors on a computer chip: its momentary state is influenced by its immediate past state and it will, in turn, influence its future state. The more the current state of a system specifies its cause, the input, and its effect, the output, the more causal power the system has. Integrated information is a number that can be computed for a system: the larger the number, the more conscious the system is.
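Computing IIT's actual measure, Φ, is intractable for all but tiny systems, but the shape of the calculation can be shown on a toy example. The Python sketch below rests on simplifying assumptions that are mine, not the interview's: two binary nodes with deterministic dynamics, and mutual information between past and present state as a crude stand-in for IIT's cause-effect measure. It compares how much the whole system's state constrains its next state with how much each half does on its own, and takes the shortfall of the parts as "integration."

```python
# Toy illustration of integration (NOT IIT's actual phi): how much does a
# system's current state constrain its next state, whole versus cut in two?
from itertools import product
from math import log2

def step(state):
    """Deterministic dynamics on two binary nodes: each copies the other."""
    a, b = state
    return (b, a)

def mutual_information(pairs):
    """I(X;Y) in bits, treating the observed (x, y) pairs as equally likely."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# Whole system: past state versus next state, over all possible past states.
states = list(product([0, 1], repeat=2))
whole = mutual_information([(s, step(s)) for s in states])

# Cut system: predict each node's next state from that node's own past only.
part_a = mutual_information([(s[0], step(s)[0]) for s in states])
part_b = mutual_information([(s[1], step(s)[1]) for s in states])

# The information lost by the cut is a crude stand-in for "integration."
print(f"whole: {whole:.1f} bits, parts: {part_a + part_b:.1f} bits, "
      f"integration: {whole - (part_a + part_b):.1f} bits")
```

For these dynamics each node copies the other, so neither half predicts its own future at all (0 bits), while the whole system determines it completely (2 bits): cutting the system destroys all of its predictive structure, which is the sense in which it is integrated.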

Does the theory have practical consequences?

Yes, it does. The theory has given rise to the construction of a consciousness-meter that is being tested in various clinics in the U.S. and in Europe. The idea is to detect whether seriously brain-injured patients are conscious, or whether truly no one is home. Patients in a vegetative state lie in bed, unable to move or speak voluntarily, sometimes unable even to move their eyes anymore, but the consciousness-meter tells us that about a fifth of them remain conscious, in line with brain-imaging experiments.
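The consciousness-meter Koch alludes to is, in the published clinical studies, the perturbational complexity index (PCI): the cortex is briefly perturbed with a magnetic pulse, the EEG response is binarized, and its compressibility is measured; rich, hard-to-compress responses track consciousness. At the core of PCI is a Lempel-Ziv phrase count. Below is a minimal sketch of that count on made-up binary strings rather than real EEG data; the function name and test inputs are mine.

```python
def lempel_ziv_complexity(s: str) -> int:
    """LZ76 phrase count: scan the string, cutting off the shortest phrase
    that has not occurred earlier; complex signals need more phrases."""
    i, phrases, n = 0, 0, len(s)
    while i < n:
        length = 1
        # Grow the phrase while it still repeats something already seen.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# A regular (compressible) signal needs few phrases; an irregular one, many.
print(lempel_ziv_complexity("01" * 16))           # periodic string: 3
print(lempel_ziv_complexity("0110100110010110"))  # irregular string: 7
```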

What are the philosophical consequences of the theory?

On a philosophical level, the theory says that consciousness is not unique to humans, but that any system with non-zero integrated information will feel like something. Take a bee, which has a million neurons. Our theory says that it feels like something to be a bee. Not that it has a voice in its head or that it makes plans for the weekend, but when a bee flies to a flower and returns laden with pollen, it might feel something akin to pleasure. Other times, for example when it hasn't found food, it might feel bad. Consciousness is much more widespread than is typically assumed in Western culture.

Artificial intelligence has given machines like IBM's Watson and DeepMind's AlphaGo superhuman abilities. What does your theory predict about whether or not such machines can become conscious?

Watson and AlphaGo are narrow AI. But no doubt, sooner or later we will get machines that are at least as intelligent as humans are. However, we have to distinguish intelligence from consciousness. Although intelligence and consciousness often go hand in hand in biological creatures, they are two conceptually very different things. Intelligence is about behavior. For example: what do you do in a new environment in order to survive? Consciousness is not about behavior; consciousness is about being.

Our theory says that if we want to decide whether or not a machine is conscious, we shouldn't look at the behavior of the machine, but at the actual substrate that has causal power. For present-day AI systems, that means we have to look at the level of the computer chip. Standard chips use the von Neumann architecture, in which one transistor typically receives input from a couple of other transistors and projects to only a couple of others. This is radically different from the causal mechanism in the brain, which is vastly more complex. You can compute that the causal power of von Neumann chips is minute. Any AI that runs on such a chip, however intelligent it might behave, will still not be conscious like a human brain.
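To get a rough sense of the gap Koch describes, one can compare fan-ins. The figures in the sketch below are order-of-magnitude assumptions (a gate driven by a handful of others, a cortical neuron receiving on the order of ten thousand synapses), not numbers from the article.

```python
from math import log10

# Hypothetical, order-of-magnitude fan-ins (assumptions, not measurements):
# a gate in a von Neumann chip listens to a few neighbors; a cortical
# neuron integrates input from roughly ten thousand synapses.
elements = {"logic gate": 4, "cortical neuron": 10_000}

for name, fan_in in elements.items():
    # With fan_in binary inputs, 2**fan_in distinct input patterns can
    # shape the element's next state; report the count as a power of ten.
    patterns_log10 = fan_in * log10(2)
    print(f"{name}: fan-in {fan_in:>6}, ~10^{patterns_log10:.0f} input patterns")
```

The point of the comparison is combinatorial: the number of distinct input patterns that can shape one element's next state grows exponentially with fan-in, which is one intuition behind the claim that a chip's intrinsic causal power is minute compared to a brain's, regardless of the software it runs.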

Let's imagine that we simulate the brain in all its biological detail on a supercomputer. Will that supercomputer be conscious?

No. It doesn't matter whether the Von Neumann machine is running a weather simulation, playing poker, or simulating the human brain; its integrated information is minute. Consciousness is not about computation; it's a causal power associated with the physics of the system.

Are there other types of machines that can become conscious?

The theory predicts that if we create a machine with a very different type of architecture, it might become conscious. All it needs is a high degree of integrated information. Neuromorphic computers or quantum computers can, in principle, exhibit a much higher degree of integrated information. Maybe they will lead us to conscious machines.

As long as a computer behaves like a human, what does it matter whether or not it is conscious?

If I take my Tesla car and beat it up with a hammer, it's my right to do it. My neighbor might think that I am crazy, but it's my property. It's just a machine and I can do with it what I want. But if I beat up my dog, the police will come and arrest me. What is the difference? The dog can suffer, the dog is a conscious being; it has some rights. The Tesla is not a conscious being. But if machines at some point become conscious, then there will be ethical, legal, and political consequences. So, it matters a great deal whether or not a machine is conscious.

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.
