The recent explosion of interest, hype, and fear surrounding artificial intelligence, data science, machine learning, and robotics has focused a spotlight on software engineers. The business magnate Elon Musk has called for regulation of AI, and Russian President Vladimir Putin has declared that mastery of AI will lead to world domination. Are software engineers responsible for these outcomes? Here, I argue that software engineers have less control over their designs than they likely realize. Instead, software technologies are evolving in a Darwinian way; more precisely, they are coevolving with human culture.
In the field of software engineering, the term "evolution" has been used for the gradual change of single, typically large, programs (see, for example, Lehman [6]). In this Viewpoint, I use "evolution" in a true Darwinian sense, where it refers to populations, not individuals, and is driven by procreation, mutation, and survival. My claim is more radical than that individual programs change gradually: it is that populations of programs evolve along with the human cultures that use and develop them. To understand this interpretation requires looking beyond the software engineering literature.
The philosopher Daniel Dennett makes the case that the human mind, consciousness, language, and culture are the result of a Darwinian evolution [3]. He points to digital technology and software as a canonical example of an opposite kind of design, what he calls "top-down intelligent design" (TDID). He gives an elevator-controller example, observing that every contingency, every behavior of the system, is imposed on it by a cognitive engineer. But as software systems go, an elevator controller is a simple one. For more complex software, like that in Wikipedia, Facebook, a banking system, or a smartphone, is it really correct to view it as TDID? In this Viewpoint, I argue that these systems are more like the cultural artifacts Dennett attributes to evolution than the results of deliberative, cognition-driven synthesis. These systems evolved through decades of iterative design revisions, with many failures along the way. More interestingly, they coevolved with the cultural artifacts (programming languages, tools, idioms, and practices) used to build software and with the culture surrounding their use.
Dennett elaborates an earlier, controversial position posited by Richard Dawkins, who coined the term "memes" for cultural artifacts and ideas that propagate [2]. The term is meant to sound like "genes": Dawkins argued that memes propagate via mutation and natural selection, where, in Dennett's words, "fitness means procreative prowess," and the source of mutations is human.
In my recent book, I expand upon the coevolution of humans and technology, driven not by genetic mutation over generations but by a faster, younger form of evolution akin to the cultural evolution of Dawkins and Dennett [5]. I claim that technology itself should be viewed as a new class of replicators, members of a technospecies, that procreate and die. Technospecies evolve symbiotically with human cognition and culture, the memetic species of Dawkins and Dennett.
Humans today are strongly dependent on software systems, just as software systems are dependent on humans: What would happen to humanity if computerized banking systems suddenly failed? This is a classic symbiosis, where a technospecies individual that rewards humans gets rewarded in return by being nurtured and developed.
So what drives progress in software technologies? Certainly, TDID plays a role. Software is written with a purpose. But the purpose itself is shaped by previous software, by cultural forces (memes), and by what has been made possible by the available programming languages, libraries, and tools. Although most individual programs die and disappear, their effect on the next generation of programs, through these background forces, may be considerable. The survival and evolution of a technospecies, therefore, depend more on the ability of its individuals to procreate than on their ability to survive. This feedback loop is accentuated by the very concrete benefits these individuals afford to humans, for example by providing software engineers with income. When the software is successful, this income facilitates propagation and further evolution of the software [4]. Is it possible that our cognitive processes when we write software are just cogs in a relentless evolution whose only "purpose" is procreation?
Dennett does notice coevolution in simpler technologies than software. If you will forgive my three levels of indirection, Dennett quotes Rogers and Ehrlich [8] quoting Alain [1] writing about fishing boats in Brittany: "Every boat is copied from another boat. ... Let's reason as follows in the manner of Darwin. It is clear that a very badly made boat will end up at the bottom after one or two voyages and thus never be copied. ... One could then say, with complete rigor, that it is the sea herself who fashions the boats, choosing those which function and destroying the others."
By analogy, perhaps it is teenagers who fashioned Snapchat, rather than software engineers. Dennett fails to see that software and boats are not so different: "To take the obvious recent example of such a phenomenon, the Internet is a very complex and costly artifact, intelligently designed and built for a most practical or vital purpose ..."
But this is an oversimplification of the Internet. ARPA funded the development of a few of the protocols that underlie the Internet, but even these protocols emerged from many failed experiments at methods for getting computers to interact with one another. ARPA had little to do with most of what we recognize as the Internet today, including Web pages, search engines, YouTube, and so forth. Much of the Internet evolved from the highly competitive entrepreneurial dog-eat-dog ecosystem of Silicon Valley. Further emphasizing the top-down nature of technology, Dennett says: "All of this computer R&D has been top-down intelligent design, of course, with extensive analysis of the problem spaces ... and guided by explicit applications of cost-benefit analysis ..."
Some software development has this character, for example in safety-critical systems such as elevators and aircraft control systems. But even these have evolved. Consider that overly prescriptive software development life-cycle models (such as waterfall and cleanroom) have evolved into more evolutionary models (including spiral, continuous integration, and agile development). The same is true of the software itself.
Although Dennett overstates the amount of TDID in technospecies, human cognitive decision making strongly influences their evolution. At the hand of a human with a keyboard, software emerges. But this design is constructed in a context that has itself evolved, and if it is not beneficial to humans, it likely fails to propagate. That context includes various artifacts of technical culture, such as human-designed programming languages that have themselves survived a Darwinian evolution and encode a way of thinking, and software components created and modified over years by others. The human is partly doing design and partly doing husbandry, "facilitating sex between software beings by recombining and mutating programs into new ones" [5]. So it seems that what we have is evolution facilitated by elements of TDID.
Is facilitated evolution still evolution? Approximately 540 million years ago, a rapid burst of evolution called the Cambrian explosion produced a very large number of metazoan species over a relatively short period of about 20 million years. In 2003, Andrew Parker postulated the "Light Switch" theory, in which the evolution of eyes initiated the arms race that led to the explosion [7]. Eyes made possible a facilitated evolution because they enabled predation. A predator facilitates the evolution of other species by killing many of them off, just as the sea "kills" boats. So facilitated evolution is still evolution.
Humans designing software are facilitators in the current Googleian Explosion of technospecies. This is proactive evolution: not just passive random mutation and death due to lack of fitness, but a mix of husbandry and predation with some elements of TDID.
How far can this coevolution go? How much smarter will humans with technology get? Dennett observes that our brains are limited, but, he says, "human brains have become equipped with add-ons, thinking tools by the thousands, that multiply our brains' cognitive powers by many orders of magnitude."
Dennett cites language as a key tool. But Wikipedia and Google are also spectacular multipliers, greatly amplifying the effectiveness of language and of software engineering.
Dennett observes that collaboration between humans vastly exceeds the capabilities of any individual human. I argue that collaboration between humans and technology further multiplies this effect. Stack Overflow, Eclipse, Google, and countless open source components vastly enhance my own productivity writing software. Technology itself now occupies a niche in our (cultural) evolutionary ecosystem. Much like the bacteria in our gut, which facilitate digestion, technology facilitates thinking, and thereby facilitates the evolution of technology.
Dennett calls AI, particularly deep learning systems, "parasitic": "[D]eep learning (so far) discriminates but doesn't notice. That is, the flood of data a system takes in does not have relevance for the system except as more 'food' to 'digest'."
This limitation evaporates when these systems are viewed as symbiotic rather than parasitic. In Dennett's own words, "deep-learning machines are dependent on human understanding."
An understanding of these systems as symbiotic mitigates today's hand-wringing and angst about AI. Dennett raises one commonly expressed question: How concerned should we be that we are dumbing ourselves down by our growing reliance on intelligent machines?
Are we dumbing ourselves down? In the symbiotic view, AI should be viewed as IA: Intelligence Amplification. For the individual neurons in our brain, the flood of data they experience likewise has no relevance except as more "food" to "digest." An AI that requires a human to give semantics to its outputs is performing a function much like that of the neurons in our brain, which, by themselves, have nothing like comprehension. It is IA, not AI.
Symbiosis does not mean we are out of danger. Again, from Dennett: "The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will overestimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence."
In addition, doomsayers predict new technospecies will shed their symbiotic dependence on humans, making humans superfluous. Dennett's final words are more optimistic: "[I]f our future follows the trajectory of our past—something that is partly in our control—our artificial intelligences will continue to be dependent on us even as we become more warily dependent on them."
I share this optimism, but also recognize that rapid coevolution, which is most certainly happening, is dangerous to individuals. Rapid evolution requires death. Many technospecies will go extinct, and so will memetic species, including programming languages and careers.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.