
Communications of the ACM

ACM News

Evolutionary Programming Converts Darwinism Into Algorithms

Evolutionary advances in computing.

Evolutionary algorithms have become a way to solve problems using algorithmic mutation and recombination in simulations.

Credit: USC Machine Learning & Evolution Laboratory

Evolution is arguably the most far-reaching theory to grace the scientific revolution. In one fell swoop, it explains biodiversity and the form into which every living thing has evolved; it shows how to selectively breed for nearly any characteristic, from rat-killing dogs to hefty, hearty cows; it even suggests why there is life in the universe at all. Now that the computer age is upon us, evolutionary algorithms have become a way to solve problems using algorithmic mutation and recombination in simulations.

When applied to biology, mutations of an organism's blueprint (as when cosmic rays collide with its deoxyribonucleic acid, or DNA) or the deliberate recombination of its DNA (as when genetic engineers splice in a segment from another organism) cause random or predictable results, respectively. Likewise, evolutionary programming uses the same techniques to introduce random or predictable changes into a simulation, which is then run over and over with different changes to produce a "cloud" of solutions that can be compared to determine which is optimal.
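As a minimal illustration (a toy sketch, not drawn from any system described in this article), the two operations can be expressed on bit-string "genomes" in a few lines of Python:

```python
import random

random.seed(0)

def mutate(genome, rate=0.1):
    """Random change: flip each bit with a small probability (a 'cosmic ray' hit)."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

def recombine(a, b):
    """Predictable change: splice the front of one genome onto the back of another."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

parent1 = [0] * 8
parent2 = [1] * 8
print(mutate(parent1))              # a few random bit flips
print(recombine(parent1, parent2))  # zeros up to the cut point, then ones
```

Running either operator many times over a population produces the "cloud" of candidate solutions the article describes.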

"Evolutionary computation is a large and varied field that has been around for about 50 years," said Robert Heckendorn, an associate professor of computer science at the University of Idaho.

Evolutionary computation strategies have advanced to include a multitude of biologically inspired algorithms for optimization problems, usually those with too many variables to compare all the possible scenarios; hence the need to evolve only the best candidates. These techniques, sometimes called soft computing, include trial and error, metaheuristics, stochastic optimization, genetic programming, neuroevolution, learning classifier systems based on reinforcement or supervised learning, and dozens of other variations. What they all have in common is that they tackle problems whose search space is too huge to traverse, enumerate, and compare in its entirety.
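All of these methods share one loop: vary a population, score each candidate, and keep the best. A minimal Python sketch follows; the one-line objective function is an invented stand-in for the expensive simulations these algorithms are normally used on:

```python
import random

random.seed(1)

def fitness(x):
    # Toy objective with a known optimum at x = 3; in practice this would be
    # a simulation too costly to evaluate over the whole search space.
    return -(x - 3.0) ** 2

def evolve(pop_size=30, generations=50, sigma=0.5):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: children are mutated copies of randomly chosen survivors.
        children = [random.choice(survivors) + random.gauss(0, sigma)
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(round(best, 2))  # should land near the optimum at x = 3
```

Only the fitness function and the representation change from problem to problem; the select-and-vary loop is the common core.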

You might think these techniques would concentrate on helping biologists understand evolution and genetics, or even help farmers raise prize bulls, but you would be wrong: evolutionary programming is mainly used to crack problems in areas far removed from biology, genetics, and evolution itself.

A paper presented this summer at ACM's Genetic and Evolutionary Computation Conference 2017 (GECCO 2017, July 15-19, in Berlin, Germany) provides a good example. Titled Evolving a Real-time Evacuation for Urban Disaster Management, the paper took as its variables all the different routes an evacuation might use, factored in the damage from an urban disaster that might impede drivers and pedestrians trying to escape, and came up with a set of evacuation plans that could survive the old military dictum that "no plan survives first contact with the enemy" by presenting a cloud of interchangeable plans that can be switched in real time as events unfold.

Said Heckendorn, a co-author of the paper, "We are ready with a population of potential solutions to adapt quickly and robustly to the changing conditions in the disaster.  Nothing goes as planned." Instead, observes the paper's abstract, "we use evolutionary strategies and a probability model to route the population by optimizing for their safety. The algorithm is designed to use the strengths of evolutionary computing to repeatedly optimize an evacuation under the dynamics of a disaster such as accidents blocking critical roadways, bridge collapses, debris closures, changes in safety, and people not following evacuation directions."

Evolutionary algorithms, said Heckendorn, require "a population, a representation, reproduction/variation methods, and selection techniques similar to those that natural evolution uses. Our particular representation is a cloud of probabilities centered on road intersections, namely, which way to leave the intersection. This makes a vector that we evaluate for fitness by running a simulation of the city. We evolve a population of vectors to solve the problem using our evolution strategies, then we let the disaster unfold and re-optimize against each new state of the city. Bridges could collapse, poison gas could change direction with the wind, people could ignore instructions, and on and on. For each new factor, we re-optimize the best turn to make with regard to each new disaster scenario."
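The pattern Heckendorn describes, evolve a probability vector per intersection, score it by simulation, and re-optimize when the disaster changes the city, can be caricatured in a few dozen lines. The road graph, simulation, and simple (1+1)-style evolution strategy below are all invented for illustration and are far simpler than the paper's model:

```python
import random

random.seed(2)

# Invented toy road graph: each intersection has two outgoing roads, and
# probs[node] is the probability of taking the first one. Node 3 is safety.
ROADS = {0: (1, 2), 1: (3, 0), 2: (3, 0)}
EXIT = 3

def simulate(probs, blocked=frozenset(), evacuees=300, steps=10):
    """Fraction of evacuees reaching safety: the fitness of one routing plan."""
    safe = 0
    for _ in range(evacuees):
        pos = 0
        for _ in range(steps):
            first, second = ROADS[pos]
            nxt = first if random.random() < probs[pos] else second
            if (pos, nxt) in blocked:
                continue  # collapsed bridge or debris: wait at the intersection
            pos = nxt
            if pos == EXIT:
                safe += 1
                break
    return safe / evacuees

def optimize(probs, blocked=frozenset(), iters=80, sigma=0.15):
    """(1+1)-style evolution strategy: keep any mutation that scores better."""
    best, best_fit = list(probs), simulate(probs, blocked)
    for _ in range(iters):
        cand = [min(1.0, max(0.0, p + random.gauss(0, sigma))) for p in best]
        fit = simulate(cand, blocked)
        if fit >= best_fit:
            best, best_fit = cand, fit
    return best, best_fit

plan, fit = optimize([0.5, 0.5, 0.5])          # plan evolved before the disaster
# The disaster unfolds: the road from intersection 1 to the exit collapses.
replan, refit = optimize(plan, blocked={(1, EXIT)})
```

Because the optimizer restarts from the previous plan rather than from scratch, each re-optimization adapts quickly to the new state of the city.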

Another GECCO 2017 session offered a tutorial on how to write evolutionary algorithms that simultaneously optimize multiple objectives. Titled Tutorial on Evolutionary Multiobjective Optimization, it explains how this technique goes against the grain of most evolutionary programs, which usually pursue a single foremost objective, adjusting multiple variables to reach one optimal solution.

"There are lots of hard algorithmic problems in multi-objective optimization," said Heckendorn.

However, according to the tutorial's author, Dimo Brockhoff of the Institut National de Recherche en Informatique et en Automatique (Inria, the French Institute for Research in Computer Science and Automation), it is possible to optimize multiple objectives simultaneously, although none of the resulting solutions will be completely optimized for every objective; instead, each objective must be compromised somewhat to reach an overall solution that is as close to optimal as possible on all of them.

Yet another paper models not just evolution, but the real-time coordinated behaviors evolved by "swarming animals," from mound construction by termites to communication among bees, ant-trail behaviors, the foraging of fish schools, and the migration of bird flocks. The advantage of writing algorithms that mimic these swarming behaviors, according to Dirk Sudholt of the University of Sheffield (U.K.) in his paper Theory of Swarm Intelligence, is that they harness millions of years of optimizing tactics that can quite straightforwardly be cast into algorithms.

According to Sudholt, "Swarm algorithms are powerful general-purpose optimizers that mimic and exploit the collective intelligence of a swarm of animals. They are easy to apply, even when the problem is not well understood or when standard techniques fail, and they often deliver surprisingly good solutions."
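Particle swarm optimization is one classic algorithm of this kind; the generic sketch below (not code from Sudholt's tutorial) moves each particle under attraction toward its own best-known position and the swarm's best, loosely mimicking a flock converging on food:

```python
import random

random.seed(3)

def sphere(x):
    # Toy objective with its minimum at the origin; a stand-in for a problem
    # that is poorly understood or resists standard techniques.
    return sum(v * v for v in x)

def pso(dim=2, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (minimization)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=sphere)              # the swarm's best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Inertia plus pulls toward the personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
```

Note that no particle needs any global knowledge of the objective's shape; the "collective intelligence" emerges from sharing only the best positions found so far.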

R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.
