Architecture and Hardware Viewpoint

Biologically Uninspired Computer Science

Don't limit your inspiration to biology, today's favorite metaphor, when developing new architectures and systems.
  1. Introduction
  2. Human Vs. Machine
  3. Get Real
  4. References
  5. Author

Despite a steady flow of new challenges and an aging bundle of old ones, the fundamentals of computer science have remained basically unchanged since the field’s earliest days in the 1940s. What does the future hold? Can we simply keep following the same paradigms? Should we instead rethink the way we build, program, organize, and interact with computers? To answer these questions, computer science needs "composite inspiration" from all scientific domains, not just from nature.

Imagine a gang of space aliens designing and manufacturing an intelligent artifact in some faraway galaxy. This machine, they think, would finally make it possible to decipher the mysterious symbols on that silly little disc attached to that spindly alien spacecraft they came across light-years ago near a distant star. From our earthly perspective, we might wonder: from what source would these extraterrestrial tinkerers draw inspiration and guidance for their design approach? In the likely absence of extragalactic flora and fauna, and considering that Alan Turing is as unknown to them as an old-fashioned silicon transistor and Boolean logic, how might their machine help them solve their problem?

This extraterrestrial thought experiment leads to fundamental (and largely unanswered) questions about "intelligence," how to build artifacts that behave "intelligently," and what our available design options might be in light of the physical limitations we must obey back home on Earth.

Turing’s hope that "[…] machines will eventually compete with men in all purely intellectual fields" is far from accomplished. We have, for example, been promised artificial intelligence (AI), a field not generally concerned with biological plausibility. Chess, once the holy grail of AI, went out of fashion the moment a machine was able to compete with a human player. Suddenly we no longer viewed brute-force approaches as particularly intelligent. More recently, old-fashioned "brain dead" AI has been revived as the "new AI," despite its not-so-new content and not-so-new results.

Connectionism, once a rising star among scientists, has not lived up to its promise either. We are still just scratching the surface of how brains work and how one could artificially build one by drawing inspiration from their real counterparts.

In trying to design or evolve living systems from nonliving matter, the field of artificial life (Alife) has also failed. Biological organisms are constantly doing things no artifact can match. The syllogism of simple rules governing complex patterns—or more outlandishly, the whole universe—is seductive but oversimplified.

When Alife began to lose its momentum several years ago, biologically inspired (or nature-inspired) computer science became a buzzword and the new ultimate design paradigm, whose broadly defined mission—not unlike Alife’s—is to mimic rather than copy nature. The field of biologically inspired computer science is generally more concerned with solving real problems and building more powerful machines, unlike the Alife mantra of "discovering how life works by building it."

Trying to copy or mimic life or lifelike behavior in all scientific disciplines has generally produced disillusion after high initial hopes and hype. Frustration is typical. Rodney Brooks, director of the MIT Computer Science and Artificial Intelligence Laboratory and chief technology officer of iRobot, has rhetorically asked "What is going wrong?" [1], providing four possible answers:

  • The parameters of our models are wrong;
  • We are below some complexity threshold;
  • We lack computing power; and
  • We are missing something fundamental and unimagined.

Like most such articles (including this one), Brooks’s offered more food for thought than practical solutions.

Human Vs. Machine

The difference between human and machine and between information processing in nature and in our artifacts has stimulated countless articles, discussions, and controversy. We are far from resolving what might be the most promising path to making machines more lifelike. "Read this aloud and your inner ear, by itself, will be carrying out at least the equivalent of a billion floating-point operations per second, about the workload of a typical game console" [5]. Why the gap between human and machine performance? And how can we bridge it to produce smart artifacts that make our lives easier and help us face the upcoming (grand) challenges (such as human-like intelligence, synthetic life, functional genomics, materials engineering, and sustainable energy)?

The late Michael Conrad, a pioneer of biologically inspired computing and a professor at Wayne State University, Detroit, argued there is a fundamental trade-off principle in the "brain-machine disanalogy": No system, he wrote, can be at once highly structurally programmable, evolutionarily efficient, and computationally efficient [2]. Programmability involves a price that nature does not have to pay.

Building computers is about hijacking the underlying material in order to make it do the things we want, whereas biological organisms co-evolved with highly dynamic and complex environments by exploiting the underlying physics in all its varied dimensions. What makes them smart is their ability to constantly adapt—something machines are not good at. But nature is not perfect, nor does good design have to resemble nature. Nature did not invent the wheel, and the wings of airplanes do not flap. Drawing an analogy between AI and artificial flight, Ford and Hayes [4] argued "[…] that the traditional view of the goal of AI—create a machine that can successfully imitate human behavior—is wrong." Bingo.

Copying is not cheating but is inherently difficult, if not impossible, in the case of nature. Imitating is not stealing but risks being inadequate, producing abstractions that are wrong and focusing on something that could more readily be solved differently. The Wright brothers could tell us a long story about flapping wings and why they should not have been inspired by them.

The whole point of biological inspiration is embodied in the painting metaphor, whereby our hands do something more interesting when our eyes are looking at something than when guided by our imaginations alone. Yet blind people sometimes see more than their sighted counterparts—which brings us back to the space-alien engineers and their options when building their smart artifact. Copying and mimicking their own brains are both unrealistic options because they do not know enough about them and do not have the right building materials. What is the right level of abstraction? What functionality is strictly necessary? How would one know, given an insufficient understanding of the original system, what to mimic? Alternatively, they might start from scratch and tackle the challenge more "blindly" with their own knowledge, tools, and methods, along with their own extraterrestrial creativity.

Get Real

Back on Earth, where gravity is the law, not just a good idea, the situation is more constrained. The Turing machine (TM) completely describes algorithmic problem-solving by computers, including its limits. No physical machine in our seemingly finite world has ever been able to compute functions a TM could not compute—an idea commonly known as the Church-Turing thesis. Information is inherently physical, and computing’s innate agenda is physics, whereby "imagination and creativity" are given clear limits, as opposed to the virtual [3], whereby everything, including nonsense, can make sense. Consider Alice in Wonderland.
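
The TM model invoked above is disarmingly small; the sketch below (a simulator and a trivial bit-complementing machine, both illustrative assumptions rather than anything from the text) shows the finite transition table and unbounded tape that the Church-Turing thesis credits with capturing all physically realizable computation:

```python
# Minimal Turing machine simulator: a finite transition table driving a
# read/write head over an unbounded tape (hypothetical example machine).

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: complement every bit, then halt at the first blank.
NOT_MACHINE = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_tm(NOT_MACHINE, "10110"))  # -> 01001
```

However exotic the hardware, the thesis says, anything it computes could in principle be expressed as such a table—which is exactly the frame the rest of this section pushes against.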

Researchers in the field of emerging and unconventional computing paradigms have long explored the use and abuse of the inherent properties of materials for computation. Do we want to hijack a given material for computation or simply build something new from scratch, so we are not forced to pay the price for programmability? Advances in both nanotechnology and synthetic biology have opened doors toward achieving the dream of building any desired material from scratch. Engineering a completely new and separate tree of (A)life thus comes increasingly within reach.

Why must we come up with the same solutions we find in nature if our computational building blocks are so different? Why mimic or copy a world we still hardly understand? Wearing the distorting glasses of biological inspiration might blind us to solutions we can more easily and efficiently engineer another way.

Progress toward closing the gap between nature and our artifacts depends on disruptive ideas (such as designing wings that do not flap). We are unlikely to achieve them by focusing solely on nature just because we do not know any better. Let’s all be a little blind to what nature offers us from time to time and be more inspired by being more bio-uninspired. Let’s focus more on our imaginations and on the artificial.

So, should we pursue biological inspiration? The question is misleading. I’m suggesting a much broader, more composite, and pragmatic consideration of all available "inspirations," including unconventional and novel paradigms, that need not be biological at all. We are experiencing a composite revolution, whereby the convergence of various sciences, along with related inspirations, is more likely to lead us to the destination we seek than any single one of them can. The future of computing looks bright, broad, and hybrid, at the common edge of bio-, nano-, neuro-, cognitive, and other sciences. What matters is whether we are able to build more lifelike machines.

References

    1. Brooks, R. The relationship between matter and life. Nature 409, 6818 (Jan. 2001), 409–410.

    2. Conrad, M. The brain-machine disanalogy. BioSystems 22, 3 (1989), 197–213.

    3. Crowcroft, J. On the nature of computing. Commun. ACM 48, 2 (Feb. 2005), 19–20.

    4. Ford, K. and Hayes, P. On computational wings: Rethinking the goals of artificial intelligence. Scientific American Special Issue (Exploring Intelligence) 9, 4 (Winter 1998), 78–83.

    5. Sarpeshkar, R. Brain power. IEEE Spectrum 43, 5 (May 2006), 24–29.
