Neuroscience continues to make new discoveries about how the human brain works, leading optimists to believe the decoding of the brain's functions is just around the corner. With such an understanding, we could make robot brains as smart as ours, and maybe even achieve immortality by uploading one’s consciousness.
Unfortunately, that is all fanciful fiction, according to a group of neuroscientists who recently performed a unique experiment purportedly demonstrating that the tools of neuroscience are nowhere near up to the task of casting consciousness into electronics. Using the most sophisticated neuroscience techniques available, they tried to understand one of the simplest microprocessors ever made, and failed. Their conclusion: we are far from duplicating the brain in electronics.
The study, as reported in the paper Could a neuroscientist understand a microprocessor?, aimed to gauge the effectiveness of the tools of neuroscientists in determining the internal operations of a microchip by applying them to the man-made "brain" behind Donkey Kong, Space Invaders, and Pitfall—the seminal 6502 8-bit economy microprocessor built by now-defunct MOS Technology.
The study was an exercise in "reverse engineering," taking a finished product like a man-made microprocessor or a biological brain, and trying to tease out the functional principles that would enable us to build one from scratch. "There's tremendous potential in using synthetic systems for reverse engineering—our hope is to scale this analysis to much larger computing architectures, such as UC Berkeley's RISC-V," said Eric Jonas, a co-author of the study.
The paper explains the purpose of the study is to “use our ability to perform arbitrary experiments” on a “simulated classical microprocessor as a model organism,” to determine whether “popular data analysis methods from neuroscience can elucidate the way it processes information.” While the researchers find “interesting structure in the data,” that structure does “not meaningfully describe the hierarchy of information processing in the processor,” which “suggests that current approaches in neuroscience may fall short of producing meaningful models of the brain."
The authors' ultimate goal is to find methods of analyzing real brains that would let neuroscientists understand a brain as well as engineers understand a microprocessor today, ultimately enabling humankind to build working electronic brains. The authors credit an earlier work in the same spirit, the 2002 paper Can a biologist fix a radio? by Russian scientist Yuri Lazebnik.
The researchers worked with a transistor-level simulation of the 6502, reconstructed from microscope images of the physical chip, which let them monitor its internal operations much as neuroscientists monitor the neurons and synapses of real brains. In the end, they tracked all 3,510 of the chip's transistors, along with several supporting circuits, ultimately generating about 1.6 GB per second of state information.
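A rough sense of that scale can be sketched in a few lines of Python. The snippet below is a minimal illustration only; the sim object, its step() and transistor_states() calls, and the sampling rate are assumptions standing in for whatever interface the study's transistor-level simulator actually exposed.

```python
import numpy as np

N_TRANSISTORS = 3510      # transistor count of the MOS 6502
N_STEPS = 1_000_000       # assumed: one state snapshot per clock tick for ~1 second

def record_states(sim, n_steps=N_STEPS):
    """Capture a (time x transistor) matrix of on/off states.

    `sim` is a hypothetical transistor-level simulator exposing step() and
    transistor_states(); the actual study used its own tooling.
    """
    states = np.zeros((n_steps, N_TRANSISTORS), dtype=np.bool_)
    for t in range(n_steps):
        sim.step()                              # advance the simulated clock one tick
        states[t] = sim.transistor_states()     # boolean vector, one entry per transistor
    return states

# At roughly a million snapshots per second, 3,510 booleans per snapshot add up
# to billions of samples per second, which is why the raw state dumps quickly
# reach gigabytes per second.
```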
On first analysis, the authors found many similarities between chips and brains: both operate on a variety of time scales, have specialized modules organized hierarchically, route information, and retain memory.
There were also differences that, in principle, should have made the chip easier to understand with neuroscience's tools, but ultimately did not. For instance, the chip contains far fewer types of components than the thousands of different kinds of neurons and synapses inside the brain.
The researchers identified some algorithmic and implementation similarities between the chip and the brain. However, applying the principles of neuroscience alone yielded a far shallower level of understanding, one that came nowhere near even the "can you fix it?" test that Lazebnik offered as a benchmark for declaring understanding.
The neurological "lesion" method (in which neurons, or here transistors, are individually disabled to determine what they do) failed to reveal the algorithms underlying the chip's operation. Of the 3,510 transistors monitored, the researchers found 1,560 whose individual removal prevented the chip from running any of the three games, 200 whose removal broke two of the games, and 186 whose removal broke only a single game. Yet they found no transistor that could meaningfully be said to define an individual game.
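To make the procedure concrete, here is a minimal sketch of what such a lesioning sweep looks like in code. The simulator methods force_off(), restore(), and boots() are hypothetical stand-ins, not the study's actual tooling.

```python
GAMES = ["Donkey Kong", "Space Invaders", "Pitfall"]

def lesion_sweep(sim, n_transistors=3510):
    """Disable each transistor in turn and record which games fail to boot."""
    failures = {}    # transistor index -> set of games broken by its removal
    for tx in range(n_transistors):
        sim.force_off(tx)                                  # "lesion" a single transistor
        broken = {g for g in GAMES if not sim.boots(g)}    # re-run each game's boot sequence
        sim.restore(tx)                                    # undo the lesion before the next trial
        if broken:
            failures[tx] = broken
    return failures

# A transistor whose removal breaks only one game can look, misleadingly, like a
# "Donkey Kong transistor" -- the same interpretive trap that lesion studies face in brains.
```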
Neuroscientists also have found that seeing and hearing at the same time produces simultaneous activity in the visual and auditory areas of the cortex, although many other patterns of synchronous activity in the brain are not yet understood. Similar synchronized patterns were observed in the 6502 microprocessor, but because the chip's functions are fully known, the researchers could show that many of the synchronized elements were functionally unrelated, suggesting that neuroscientists, and builders of deep learning neural networks, may be making too much of synchrony in the brain.
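The synchrony analysis alluded to here amounts to computing pairwise correlations between activity traces. A minimal NumPy sketch, assuming the boolean state matrix from the earlier snippet and an arbitrary correlation threshold, might look like this:

```python
import numpy as np

def synchronous_pairs(states, threshold=0.9):
    """Return pairs of transistors whose on/off traces are strongly correlated.

    `states` is a (time x transistor) boolean matrix; the 0.9 cutoff for
    "synchronous" is an illustrative assumption, not the paper's criterion.
    """
    traces = states.astype(float)
    active = traces.std(axis=0) > 0             # ignore transistors that never switch
    idx = np.flatnonzero(active)
    corr = np.corrcoef(traces[:, active], rowvar=False)
    i, j = np.triu_indices_from(corr, k=1)      # upper triangle, excluding self-pairs
    mask = np.abs(corr[i, j]) >= threshold
    return list(zip(idx[i[mask]], idx[j[mask]]))

# On the chip, highly correlated pairs frequently turn out to be functionally unrelated.
```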
The researchers concluded that using standard data analysis techniques on the chip produces “results that are surprisingly similar to the results found about real brains.” However, they observed further, “in the case of the processor, we know its function and structure, and our results stayed well short of what we would call a satisfying understanding. …Unless our methods can deal with a simple processor, how could we expect them to work on our own brain?"
Said neural processing expert Michael Lewicki at Case Western Reserve University, "I agree with the point the authors are trying to make, but just because a group of neuroscientists couldn’t deduce the function of a microprocessor doesn’t mean that a whole field of researchers, working for decades, developing specialized tools and theories, won’t be able to figure out neural function in the brain. They’re not really comparable.
“That said, the spirit of the paper is that it’s very difficult to deduce the function of a circuit by inspecting its components. And as the authors say, more data doesn’t solve the problem: data is necessary, but it’s not sufficient. That’s why theoretical neuroscientists try to develop functional theories that predict detailed properties of the architecture and organization from deeper principles. It’s analogous to the days when astronomers were cataloging the motion of the planets. They had a lot of data, but no understanding of why they moved the way they did. Kepler devised the three laws of planetary motion, but it took Newton to conceive of concepts like gravity, force, and mass to explain it at a deeper level in terms of universal physical principles.
Concluded Lewicki, “The brain is vastly more complex, but there is still good evidence that it obeys deeper underlying principles. We just have a very limited understanding of what they are."
R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.