Look around you, and you will be stunned by the work of evolution. According to Nobel Laureate Jacques Monod, a strange thing about evolution is that all educated persons think they understand it fairly well, and yet very few—if any, one may grumble—actually do. Understanding evolution is essential: "Nothing in biology makes sense except in the light of evolution," famously said the eminent 20th century biologist Theodosius Dobzhansky. And evolution is closer to home than black holes and other mysteries of science—it feels almost like your family history.
More so than most scientific fields, the theory of evolution has a sharp beginning: the publication of Charles Darwin's The Origin of Species in 1859.8 But of course nothing is that simple: during the first half of the 19th century, several scientists were convinced that the diversity of life we see around us must be the result of evolution (see the sidebar on Charles Babbage and the accompanying figure). Darwin's immense contribution lies in three things: his identification of natural selection as the engine of evolution; his articulation of the common descent hypothesis, stating that different species came from common ancestors, and further implying that all life came from a common source; and the unparalleled force of argument with which he advanced his theory. But of course The Origin was far from the last word on the subject: Darwin knew nothing about genetics, and had no clue about the role of sex in evolution, among several other important gaps. On the ultimate reason for sex, for instance, he wrote, "the whole subject is as yet hidden in darkness."
Mathematics has informed the theory from early on. Mendel discovered genetics by discerning mathematical patterns in the ratios of sibling pea plants exhibiting different characteristics, and by building models to explain them. When his laws were rediscovered 40 years later, their discrete nature was misunderstood as being at loggerheads with Darwin's continuous conception of traits. A deep scientific crisis raged for two decades, and was eventually resolved with the help of mathematics: discrete alleles (see the glossary) can result, through cumulative contributions, in continuous phenotypic traits. From the 1920s through the 1940s, R.A. Fisher, J.B.S. Haldane, and S. Wright developed mathematical equations for predicting how the genetic composition of a population changes over the generations. This mathematical theory of population genetics is introduced briefly in the sidebar "The Equation that Reconciled Darwin and Mendel." It is key to what is called the "modern evolutionary synthesis"—the 20th-century view of evolution—because it proposed one way of unifying Darwinism and Mendelism.
During the near-century since then, the study of evolution has flourished into a mature, comprehensive, and prestigious scientific discipline, while over the past two decades it has been inundated by a deluge of molecular data, a vast scientific gold mine that informs—and often challenges—its tenets. And yet, despite the towering accomplishments of modern evolutionary biology, several important questions remain beyond our current understanding.
As we recount in this article, recent joint research by computer scientists and biologists, bringing ideas and concepts from computation into biology, has made quite unexpected progress on these questions. Additional background and literature is available in the online appendix in the ACM Digital Library (dl.acm.org) under Source Material.
Over the past 70 years, computer scientists, starting with von Neumann,37 have been inspired and intrigued by evolution. During the 1950s, computer scientists working in optimization developed local search heuristics: Start with a random solution, and repeatedly check whether some "mutation" of the current solution is better; if so, adopt it, and continue until a local optimum is reached. By "mutation" (much more often the word "neighbor" is used), we mean a solution differing from the present one in a very small number of features; in the traveling salesman problem, for example, a mutation could change two or three edges of the tour to form a new tour. This process is repeated many times from random starts, a stratagem that can be seen as a sequential way of maintaining a population (see Papadimitriou and Steiglitz32 for a survey from the 1980s).
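The scheme just described can be sketched in a few lines. The following is a minimal, illustrative implementation of local search with 2-opt "mutations" for the traveling salesman problem, with random restarts (the instance—five points on a line—is a toy chosen so the optimal tour length, 8, is easy to verify):

```python
import random

def tour_length(tour, dist):
    # Total length of a cyclic tour, given a distance matrix.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_mutation(tour, i, j):
    # A "mutation": reverse the segment between positions i and j,
    # which replaces exactly two edges of the tour.
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def local_search(dist, rng):
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)  # random starting solution
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                candidate = two_opt_mutation(tour, i, j)
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True  # adopt the better mutation
    return tour  # a local optimum: no 2-opt mutation improves it

rng = random.Random(0)
# Toy instance: five points on a line; any cyclic tour has length >= 8.
pts = [0, 1, 2, 3, 4]
dist = [[abs(a - b) for b in pts] for a in pts]
# The "sequential population": many independent runs from random starts.
best = min((local_search(dist, rng) for _ in range(10)),
           key=lambda t: tour_length(t, dist))
print(tour_length(best, dist))
```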
This basic idea of local search was enhanced in the 1980s by a thermodynamic metaphor,20 to help the algorithm escape local optima by crossing fitness barriers: Simulated annealing, as this variant of local search is called, allows the adoption of even a disadvantageous mutation, albeit with a probability that decreases both with the disadvantage and with time. A further variant called go with the winners1 is closer to evolution in that it keeps a population of solutions, teleporting the individuals that are stuck at local optima to the more promising spots. Notice that all these heuristics are inspired by asexual evolution (no recombination between solutions happens); heuristics of this genre have been used successfully in many realms of practice, and there are several practically important hard problems, such as graph partitioning, for which such heuristics are competitive with the best known.
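The acceptance rule of simulated annealing can be sketched as follows (the landscape and parameters are hypothetical, chosen so that plain local search started at position 0 would be stuck, while annealing can cross the barrier):

```python
import math
import random

def simulated_annealing(cost, mutate, x0, rng, t0=3.0, cooling=0.999, steps=5000):
    # Accept a disadvantageous "mutation" with probability exp(-disadvantage / T);
    # the temperature T shrinks over time, so late in the run only improvements
    # (and sideways moves) are accepted.
    x = best = x0
    t = t0
    for _ in range(steps):
        y = mutate(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling
    return best

# A tiny one-dimensional landscape: a local optimum of cost 1 at position 0,
# a barrier of cost 3, and the global optimum of cost 0 at position 4.
costs = [1, 3, 3, 3, 0]
cost = lambda i: costs[i]
mutate = lambda i, r: max(0, min(len(costs) - 1, i + r.choice([-1, 1])))

rng = random.Random(1)
best = simulated_annealing(cost, mutate, 0, rng)
print(cost(best))
```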
During the 1970s, a different family of heuristics called genetic algorithms was proposed by John Holland:14 A population of solutions evolves through mutations and recombination. Recombination is much more difficult to apply to optimization problems, because it presupposes a genetic code mapping the features of the problem to recombinable genes (to see the difficulty, think of the traveling salesman problem: tours can mutate, but they cannot recombine). The evolutionary fitness of each individual solution in the population (that is, the number of children it will have) is proportional to the quantity being maximized in the underlying problem. After many generations, the population will presumably include some excellent solutions. Holland's idea had instant appeal and an immense following, and by now there is a vast bibliography on the subject (see, for example, Mitchell28 and Goldberg12). The terms evolutionary algorithms and evolutionary computation are often used as rough synonyms of "genetic algorithms," but often they describe more general concepts, such as the very interesting algorithmic work—also categorized as research in artificial life—whose purpose is not to find good solutions to a practical optimization problem, but instead to understand evolution in nature by exposing novel, complex evolutionary phenomena in silico; for an example, see De Jong.16
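A minimal genetic-algorithm skeleton, in the spirit of Holland's proposal (the problem—maximize the number of 1 bits in a bitstring—and all parameters are illustrative; here the genetic code is trivial, since a bitstring recombines naturally):

```python
import random

def genetic_algorithm(fitness, genome_len, rng, pop_size=60, generations=80, mut_rate=0.01):
    # Random initial population of bitstring genomes.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(g) for g in pop]
        nxt = []
        for _ in range(pop_size):
            # Parents are chosen with probability proportional to fitness.
            p1, p2 = rng.choices(pop, weights=weights, k=2)
            cut = rng.randrange(1, genome_len)        # one-point recombination
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < mut_rate) for b in child]  # point mutations
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

rng = random.Random(2)
best = genetic_algorithm(sum, 40, rng)  # fitness of a genome = number of 1 bits
print(sum(best))
```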
Coming back to genetic algorithms, several successes in solving actual optimization problems have been reported in the literature, but the general impression seems to prevail that genetic algorithms are far less successful in practice than local search and simulated annealing. If this is true, it is quite remarkable—a great scientific mystery—because in nature the exact opposite happens: Recombination is successful and ubiquitous, while obligate asexual reproduction is extremely rare and struggling.
The authors started collaborating on a computational understanding of evolution in 2006, precisely in order to investigate this mystery, and we recount our findings in this article. In exactly the same year, Leslie Valiant first wrote on his theory of evolvability, another attempt at understanding evolution in computational terms.36 Valiant sees evolution as an approximation of an ideal fitness function by a polynomially large population of genotypes in polynomially many generations35 through learning by mutations. Notably, there is no recombination (sex) in his theory, even though it can be added for a modest advantage.17 Several natural classes of functions are evolvable in this sense; in fact, these are precisely the functions susceptible to a limited, weak form of learning called statistical query learning.18 In the section "A Game between Genes," we discuss another interesting connection between machine learning algorithms and evolution.
Sex is nearly universal in life: it occurs in animals and plants by the coming together of sperm and egg, in fungi by the fusion of hyphae, and even in bacteria:34 Two bacterial cells can pair up, for example, and build a bridge between them through which genes are transferred. Many species engage in asexual reproduction, or in selfing, at some times, but also engage in sexual reproduction at other times, keeping their genotypes well shuffled. In contrast, species that do not exchange genes in any form or manner, called "obligate asexuals," are extremely rare, inhabiting sparse, recent twigs of the tree of life, coming from sexual ancestral species that lost their sexuality, and heading toward eventual extinction without producing daughter species.22
Not only is sex essentially universal, but it seems to be very much center stage in life, the basis of a fantastic variety of behavior and structure: from bacterial conjugation to the intense molecular machinery of meiosis (cell division producing gametes), from flower coloration to bird courtship dances, from stag fights all the way to the drama of human passion, much of life seems to revolve around sex. So why? What role might sex play in evolution?
One common answer is that sex generates vast genetic diversity, and hence it must help evolution. But, just as sex puts together genetic combinations, it also breaks them down: a highly successful genotype will be absent from the next generation, as children inherit half their genes from each parent. To say the role of sex is to create particular, highly favorable genetic combinations is like watching a man catch fish only to toss them back to sea, and concluding that he wants to bring food to his family's table. (Incidentally, the designers of genetic algorithms are well aware of this downside of sex, and often allow the most successful individuals into the next generation, a stratagem known as "elitism," which however cannot be easily imitated by nature.)
Evolutionary theorists have labored for about a century to find other explanations for the role of sex in evolution, but all 20th century explanations are valid only under specific conditions, contradicting the prevalence of sex in nature.9,a
This is not a small problem. Imagine, for example, that even though much of the terrestrial world is green, we had no clue why leaves exist. That would have been a pretty big gap in our understanding of nature. Not knowing the role of sex is an even bigger gap, because far more life forms exchange genes than photosynthesize. It is no wonder the role of sex has been called "the queen of problems" in evolutionary biology.6
Since sex breaks down genetic combinations, it has mainly been thought that effective selection acts on individual alleles,38 that is, that each (non-neutral) allele is either beneficial or detrimental on its own. According to this line of thinking, two main forces drive allele frequencies: selection acting on alleles as independent actors (where alleles are often assumed to make additive contributions to fitness), and random genetic drift (chance sampling effects on allele frequency, as discussed previously). The interaction between alleles, within and between loci—even though it has been of interest in population genetics from the start10,39—has played a secondary role, often being treated as a mere correction to the above, under the term "epistasis." A few years ago, while working with biologists Marcus Feldman and Jonathan Dushoff and computer scientist Nicholas Pippenger, we asked whether interactions between alleles could be crucial to understanding the role of sex in a yet unexplored manner.23–26 Based on the standard equations used to describe how genotype frequencies change over generations (see the Darwin and Mendel sidebar), we demonstrated an important difference between sex and asex: In asexual evolution, the best combination of alleles always prevails. In the presence of sex, however, natural selection favors "mixable" alleles, those alleles that, even though they may not participate in any truly great genetic combinations, perform adequately across a wide variety of different combinations.23–26 To put it differently, in the hypothetical three-by-three fitness landscape in the sidebar, the winner of asexual evolution will be the largest entry of the fitness matrix (in this case, 1.05). In contrast, sexual evolution will favor, roughly speaking, those alleles (rows and columns) with larger average value, where the "average" takes into account the prevalence of these genotypes in the population, as we will explore.b
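The contrast can be sketched in a minimal two-locus, two-allele model, using the standard haploid recursions with free recombination (linkage equilibrium assumed each generation); the fitness values are hypothetical. Over the first generations, the asexual population is taken over by the single best combination, while under sex the allele that forms it loses ground, because it mixes poorly:

```python
# Fitness matrix: rows are alleles of gene A, columns alleles of gene B.
# Genotype (A0, B0) is the single best combination (1.05), but allele A0
# performs terribly with B1 (0.50); allele A1 performs adequately everywhere.
W = [[1.05, 0.50],
     [1.00, 1.00]]

def asexual(W, gens=10):
    # Clonal reproduction: each genotype's frequency is reweighted by its
    # own fitness, so the single best combination takes over.
    q = [[0.25, 0.25], [0.25, 0.25]]
    for _ in range(gens):
        q = [[q[i][j] * W[i][j] for j in range(2)] for i in range(2)]
        z = sum(map(sum, q))
        q = [[v / z for v in row] for row in q]
    return q

def sexual(W, gens=10):
    # Random mating with free recombination: each allele is reweighted by its
    # average fitness over the partner alleles it currently meets—its mixability.
    x, y = [0.5, 0.5], [0.5, 0.5]
    for _ in range(gens):
        fx = [sum(y[j] * W[i][j] for j in range(2)) for i in range(2)]
        fy = [sum(x[i] * W[i][j] for i in range(2)) for j in range(2)]
        x = [x[i] * fx[i] for i in range(2)]
        x = [v / sum(x) for v in x]
        y = [y[j] * fy[j] for j in range(2)]
        y = [v / sum(y) for v in y]
    return x, y

q = asexual(W)
x, y = sexual(W)
print(q[0][0])  # asexual: the best combination (A0, B0) is taking over
print(x[0])     # sexual: the poorly mixing allele A0 is being driven down
```

Over longer horizons the deterministic dynamics can settle in various ways; the point of the sketch is the short-term contrast in what selection rewards under the two modes of reproduction.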
One of the most central and striking themes of algorithms research in the past few decades has been the surprising power of randomization.29 Paradoxically, evoking chance is often the safest and most effective way of solving a computational problem. For one, it helps avoid the worst case, as in Quicksort. Second, sampling from a distribution D helps decide between competing hypotheses about D: randomized algorithms for software testing, for primality testing, and for checking the validity of polynomial identities all work this way.
Evolution under sex can be seen as an instance of a randomized algorithm of the latter type. Suppose we want to design a hypothetical evolutionary experiment for determining whether a new allele of a particular gene performs better than its alternative, across all genetic combinations. If the population is asexual, this could be done by inserting this mutation in the genome of one individual, and gauging the lineage thus founded to see if it thrives. This kind of sampling is very inefficient, because we sample from a small pool (the genotypes that happen to be available in the population), and must repeat the insertion many times—in many individuals. But if the population is sexual, then by inserting the mutation once, after log n generations, where n is the number of genes with which this particular allele interacts,33 we will be sampling from all possible genetic combinations that could in principle be constructed. Sex enables evolution to sample quickly from the entire space of genetic combinations, in the distribution under which they appear in the population. What is more, evolution under sex not only decides among the competing hypotheses (which allele performs better), but also implements this decision (eventually, and with high probability, it will fix the winner).
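A back-of-the-envelope sketch of why this mixing is fast (with illustrative thresholds): under free recombination, the statistical association D between the inserted allele and any particular background allele is halved every generation, so the number of generations until the allele appears with the alleles of its n interacting genes roughly in proportion to their population frequencies grows only logarithmically in n:

```python
# Under free recombination, the linkage disequilibrium D between the inserted
# allele and any one background locus decays as D_t = D_0 / 2^t. We count
# generations until the association with each of n interacting loci is
# negligible (the initial value d0 and threshold eps are illustrative).
def generations_until_mixed(n, d0=0.25, eps=1e-3):
    t, d = 0, d0
    while d > eps / n:
        d /= 2        # each generation of random mating halves the association
        t += 1
    return t

for n in [10, 100, 1000]:
    print(n, generations_until_mixed(n))   # grows logarithmically in n
```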
Finally, for yet another take on explaining the ubiquity of sex in life, in terms and concepts familiar to computer scientists, see The Red-Blue Tree Theorem sidebar.
The search in population genetics for a quantity that is optimized by natural selection has a long history. Fisher wanted a theory of evolution with a mathematical law as clean and central as the second law of thermodynamics,10 while Wright pointed out that the frequency of an allele at a diploid locus changes in the direction that increases the population's mean fitness.40 Later investigators tried their hand again at looking for a Lyapunov function that would describe evolution, albeit with little success.
Our search for an analytical maximization principle involving mixability ended with a surprise: We did not answer the question "What is evolution optimizing?" but, perhaps more interestingly, we identified the quantity that each gene seems to be optimizing during evolution under sex. Together with Erick Chastain and Umesh Vazirani,7 we focused on the standard equations described in the Darwin and Mendel sidebar, in a particular evolutionary regime known as weak selection.30 Weak selection is the widely held assumption that fitness differences between genotypes are small. The fitness of a genotype g in this regime is written Fg = 1 + s Δg, where s is small and Δg is the differential fitness of the genotype, ranging in [−1, 1].
Working under the weak selection assumption, and after some algebraic manipulation, we noticed that the equations of the evolution of a population under sex are mathematically equivalent to a novel process, which entails an entirely different way of looking at evolution. Writing xi for the frequency of the i-th allele of a particular gene, at each generation

xi ← xi (1 + s mi),   (Equation 1)

where mi is the expected differential fitness, positive or negative, of the i-th allele in the current gene pool. This quantity mi is a measure of what we have called the mixability of allele i, its ability to form fit combinations with alleles of other genes in the current genetic mix. This update rule is precisely the multiplicative weights update (MWU) algorithm, well known in machine learning and optimization. To summarize, at each generation in sexual evolution, each gene boosts the frequency of each of its alleles by a factor that increases with the mixability of this allele in the current generation. Naturally, the quantities resulting from the equation are normalized appropriately so as to add to one.
This is a completely new way of looking at evolution. And it is a productive view, because it gets more interesting: Let us look back at the update Equation 1 and ask once again: Is this choice of the new probabilities for the alleles by the gene optimizing something? For once, the answer is very clean: Yes, the choice of allele frequencies by the gene shown in Equation 1 optimizes the following function, specific to this gene, of the allele frequencies:

Φ(x) = Σi xi Mi − (1/s) Σi xi log xi   (Equation 2)
Here Mi denotes the cumulative relative fitness of allele i, that is, the sum of the mi in Equation 1 over all generations up to and including t − 1. It is easy to notice that Φ is a strictly concave function, and thus has one maximum, and this maximum can be checked by routine calculation to be exactly the new frequencies as updated in Equation 1! Now notice that the second term of Φ is plainly the entropy of the distribution x, a well-known measure of a distribution's diversity.
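A small numerical sanity check of this optimization claim, taking Φ (as a sketch, with hypothetical values of s and of the cumulative mixabilities Mi) to be the expected cumulative fitness plus the entropy weighted by the large constant 1/s; the maximizer over the simplex works out to the softmax xi ∝ exp(s Mi), which is what repeated multiplicative updates accumulate, to first order in the small quantity s:

```python
import math
import random

s = 0.01                    # weak selection strength (small, by assumption)
M = [0.8, -0.3, 0.2]        # hypothetical cumulative mixabilities of three alleles

def phi(x):
    # Phi(x) = sum_i x_i M_i + (1/s) * entropy(x)
    return (sum(xi * Mi for xi, Mi in zip(x, M))
            - (1 / s) * sum(xi * math.log(xi) for xi in x))

# By strict concavity, the unique maximizer of Phi over the simplex is the
# softmax distribution x_i proportional to exp(s * M_i).
z = sum(math.exp(s * Mi) for Mi in M)
x_star = [math.exp(s * Mi) / z for Mi in M]

# Sanity check: no random point of the simplex attains a larger Phi.
rng = random.Random(3)
for _ in range(1000):
    w = [rng.uniform(1e-6, 1.0) for _ in M]
    x = [wi / sum(w) for wi in w]
    assert phi(x) <= phi(x_star) + 1e-12
print(x_star)
```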
There is much that is unexpected and evocative here, but perhaps most surprising of all is that this radically new interpretation of evolution was lurking for almost a century so close to the surface of these well-trodden equations. That an algorithm as effective as MWU is involved in evolution under sex is also significant. It was pointed out in a commentary on our paper5 that the MWU is also present in asex. Indeed, asexual evolution can be trivially thought of as MWU helping nature select genotypes. However, our point is that in sexual evolution, the picture is far more sophisticated and organic, occurring deeper in the hierarchy of life: Individual genes interact, each "managing its investments in alleles" using MWU, in a context created by the other genes and, of course, by the environment.
The function Φ is a rare explicit optimization principle in evolution. The second term, and especially its large constant coefficient (recall that the selection strength s is small, and |mi|≤ 1) suggests that attention to diversity is an important ingredient of this mechanism, a remark that may be relevant to the question of how genetic diversity is maintained. But there is a mystery in the non-diversity terms: the cumulative nature of the fitness coefficients M suggests that performance during any previous generation is as important as the current generation for the determination of the genetic make-up of the next generation. How can this be?
This surprising connection between evolution and algorithms, through game theory and machine learning, as well as the maximization principle Φ, are very recent, and their full interpretation is a work in progress.
The insights about sex discussed above shed some light on the mystery of genetic algorithms mentioned earlier. There is a mismatch between heuristics and evolution. Heuristics should strive to create populations that contain outstanding individuals. In contrast, evolution under sex seems to excel at something markedly different: at creating a "good population." So, it is small wonder that genetic algorithms are not the best heuristics around. On the other hand, these insights also suggest that genetic algorithms may be valuable when the robustness of solutions is sought, or when the true objective is unknown, uncertain, or subject to change.
What is a mutation? Point mutations (changes in a single base, such as an A turning into a C) are only part of the story, as many important mutations are rearrangements of small stretches as well as large swaths of DNA: duplications, deletions, insertions, and inversions, among others.13 For a long time it was believed that mutations are the result of accidents such as radiation damage or replication error. But by now we have a deluge of evidence pointing to involved biological mechanisms that bring about and affect mutations.22
We know, for example, that the chance of a mutation varies from one region of the genome to another and is affected by both local and remote DNA.22 Nearly a quarter of all point mutations in humans happen at a C base that precedes a G, after that C has been chemically modified (methylated);11 methylation is known to be the result of complex enzymatic processes. As to rearrangement mutations, there are powerful agents of mutagenicity (the creation of mutations) in the genome, such as transposable elements: DNA sequences prone to "jump" from one place of the genome to another, carrying other DNA sequences with them.13 A key step in mammalian pregnancy (decidualization), for instance, was the result of massive evolutionary rewiring of about 1,500 genes mediated in part by transposable elements.27 Furthermore, the genetic sequences that participate in a rearrangement mutation are likely to be functionally related, since they are likely to be close to each other in 3D space and to bear sequence similarity, both of which allow interaction through recombination-based mutational mechanisms (see Livnat22 and references therein). Indeed, the same machinery that effects sexual recombination is also involved in mutations, and in fact produces different types of rearrangement mutations, depending on the genetic sequences that are present.13 Finally, different human populations undergo different kinds of mutations resulting in the same favorable effect, such as malaria resistance, suggesting that genetic differences between populations cause differences in mutation origination.22
The idea that mutations may be non-accidental is still met with suspicion due to the legacy of Lamarck, who believed around 1800 that organisms can sense, through interaction with the environment, what is needed for an evolutionary improvement, and are able to make the correct heritable change. Since in the light of modern biology this seems impossible—to a computer scientist, it sounds like reversing a one-way function, or a hash function—the accidental-mutation notion prevailed. But in science one must not assume that the only relevant alternatives are the familiar, inside-the-box ones. Mutations are random—but it may be more productive to think of them as random in the same way that the outputs of randomized algorithms are random. Indeed, mutations are biological processes, and as such they must be affected by the interactions between genes. This new conception of heredity is exciting, because it creates an image of evolution that is even more explicitly algorithmic. It also means that genes interacting in one organism can leave hereditary effects on the organism's offspring.22 It no longer matters that a lucky genetic combination created by sex is doomed to vanish from the face of the earth (that the fisherman of our earlier metaphor throws the fish away): It may have achieved a lasting effect on the population through mutagenicity. Finally, the biological mechanisms affecting mutations may themselves evolve.
If there is one idea that permeates all the various aspects of computational thinking about evolution, as explained in the past sections, it is this: Interactions between genes are crucial for understanding evolution. Gene interactions also come up in our recent work on the following question: How is variation within a species preserved?
Classic data indicate that, for a large variety of plants and animals taken together, the percentage of protein-coding loci that are polymorphic (in the sense that more than one protein variant appears in more than 1% of the individuals in a population), and the percentage of such loci that are heterozygous in an individual, average around 30% and 7%, respectively.31 These numbers are far greater than could be explained by traditional selection-based theories.21 Genetic variation is fed by mutations and, according to the equations of population genetics, it is decreased by fixation, the eventual triumph of one allele and the extinction of all other alleles of the same gene. In much of the discourse on the subject, selection is assumed to act on individual genes, and therefore fitness is additive. The equations tell us that fixation will happen after a number of generations on the order of (log N)/Δ, where N is the population size and Δ is the difference in relative fitness between the most fit and the second most fit allele. Fixation looks rather speedy.
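A quick sketch of this speedy-fixation regime (parameters are illustrative; drift is ignored and only selection acts): with two alleles of relative fitness 1 + Δ and 1, the ratio of their frequencies grows by a factor 1 + Δ per generation, so fixation from a single initial copy takes a number of generations logarithmic in the population size:

```python
import math

# Deterministic selection between two alleles with relative fitness
# 1 + delta and 1, starting from a single copy in a population of size N.
def generations_to_fixation(N, delta):
    p = 1.0 / N
    gens = 0
    while p < 1.0 - 1.0 / N:              # "fixed" once the rival is down to one copy
        w_bar = p * (1 + delta) + (1 - p)   # population mean fitness
        p = p * (1 + delta) / w_bar         # standard selection recursion
        gens += 1
    return gens

for N, delta in [(10**4, 0.05), (10**6, 0.05), (10**4, 0.01)]:
    # compare the simulated count with the (log N) / delta scaling
    print(N, delta, generations_to_fixation(N, delta), round(2 * math.log(N) / delta))
```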
In the spring of 2014, the Simons Institute brought to Berkeley 60 biologists and computer scientists to exchange ideas on evolution. It was during that time that the authors, with Costis Daskalakis and Albert Wu, explored how our understanding of the speed of fixation would be affected if one takes into account gene interactions. We approached the subject through some decades-old work on the complexity of local search;15 this theory examines how difficult it is for a local search process to reach a local optimum, and the conclusion has in many cases been: "pretty hard." By applying this point of view to selection on interacting genes, we showed that there are n-gene systems, in which the fitness is the sum total of contributions of certain pairs of alleles—that is, the next step beyond selection on single alleles—for which fixation takes a number of generations proportional to 2^n to happen. A stronger result can also be obtained under the well-accepted complexity assumption that local search is intractable in general (see Johnson et al.15 for details). The implication is that, if gene interactions are taken into account, fixation may take much longer than in the regime of selection on individual genes.
Does this insight explain the mystery of variation? Not yet, because our analysis so far has disregarded two other powerful forces in evolution, besides mutation and selection, acting on variation: the finiteness of the population, and heterozygosity (a diploid organism carrying two different alleles of a gene).
First, finiteness. Because the number of individuals carrying the alleles in question is finite, say N, the number of individuals carrying each allele evolves from generation to generation as a kind of random walk within the confines of [0, N], and, ignoring selection, this results in fixation after O(N) generations.19 Second, diploidy introduces the possibility of overdominance, in which organisms with two different alleles of a gene are more fit than organisms with two copies of either allele. In overdominance, the equations of selection point to stable variation, with both alleles enjoying stably high frequency in the population.
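The random-walk effect of finiteness can be sketched with a neutral Wright-Fisher simulation (population size and trial counts are illustrative); diffusion theory predicts an average fixation time of about 2N ln 2 ≈ 1.4 N generations when the allele starts at frequency 1/2:

```python
import random
import statistics

def drift_fixation_time(N, rng):
    # Neutral Wright-Fisher model: each generation, the N offspring alleles
    # are drawn binomially from the current allele frequency; no selection.
    k = N // 2                       # start the allele at frequency 1/2
    gens = 0
    while 0 < k < N:
        p = k / N
        k = sum(rng.random() < p for _ in range(N))  # binomial resampling
        gens += 1
    return gens

rng = random.Random(4)
N = 100
times = [drift_fixation_time(N, rng) for _ in range(200)]
print(statistics.mean(times))   # theory: about 2 * N * ln 2, roughly 139 here
```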
How these three effects, of finite population, of heterozygosity, and of selection acting on combinations of alleles across loci, interact with one another is an important subject for further research.
A computer scientist marvels at the brilliant ways in which evolution has achieved so much: Systems with remarkable resource efficiency, reliability and survivability, adaptability to exogenous circumstances, let alone ingenious and pristine solutions to difficult problems such as communication, cooperation, vision, locomotion, and reasoning, among so many more. One is tempted to ask: What algorithm could create all this in just 10^12 steps? The number 10^12—one trillion—comes up because this is believed to be the number of generations since the dawn of life 3.5 · 10^9 years ago (notice that most of our ancestors could not have lived for much more than a day). And it is not a huge number: cellphone processors do many more steps in an hour.
Over the past decade, computer scientists and evolutionary biologists working together have come up with new insights about central open problems surrounding evolution—including, rather surprisingly, a proposed answer to the "algorithm" question—by looking at evolution from a computational point of view. And, of course, many more questions, inviting similar investigation, were opened up in the process.
21. Lewontin, R.C. and Hubby, J.L. A molecular approach to the study of genic heterozygosity in natural populations; amount of variation and degree of heterozygosity in natural populations of Drosophila pseudoobscura. Genetics 54 (1966), 595–609.
27. Lynch, V.J., Leclerc, R.D., May, G. and Wagner, G.P. Transposon-mediated rewiring of gene regulatory networks contributed to the evolution of pregnancy in mammals. Nature Genetics 43 (2011), 1154–1159.
a. See the appendix available in the ACM Digital Library (dl.acm.org) under Source Material for a more extensive bibliography on this and other subjects.
b. The original paper24 refers to the unweighted average fitness as mixability, instead of the more natural average weighted by genotype frequency.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.
The following letters were published in the Letters to the Editor in the February 2017 CACM (http://cacm.acm.org/magazines/2017/2/212426).
In "Sex as an Algorithm: The Theory of Evolution Under the Lens of Computation" (Nov. 2016), Adi Livnat and Christos Papadimitriou argued eloquently that the extraordinary success of sexual evolution has not been adequately explained. Somewhat paradoxically, they concluded that sex is not particularly well suited to the task of generating "outstanding individuals." They also said that genetic algorithms are similarly ill suited to this task.
It should be noted that this critique of genetic algorithms — widely used derivative free optimization heuristics modeled on recombinative evolution — stands in counterpoint to a voluminous empirical record of practical successes. It also speaks to the long-standing absence of consensus among evolutionary computation theorists regarding the abstract workings of genetic algorithms and the general conditions under which genetic algorithms outperform local search. A consensus on these matters promises to shed light on the question the authors originally aimed to answer: Why does recombinative evolution generate populations with outstanding individuals?
Generative hypomixability elimination(1) is a recent theory that addresses this question, positing that genetic algorithms efficiently implement a decimation heuristic that generates fitter populations over time by iteratively eliminating the joint entropy of small collections of "hypomixable loci," or loci in which alleles do not mix well. Recombination, or mixing, allows such loci to go to fixation even as it safeguards the marginal entropy of non-interacting loci.
Taking a step back, one might ask how this theory and the theory proposed by Livnat and Papadimitriou might be evaluated. Proof of soundness, wherever possible, is always desirable, but end-to-end proof can be elusive when analyzing computation in biological systems like brains and evolving populations. We must instead use the scientific method(2), an approach undergirded by the following rule:
(hypothesis ==> prediction) ≡ (¬prediction ==> ¬hypothesis)
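The rule above is simply the contrapositive of material implication, and its validity can be checked mechanically. A small Python sketch (the helper name `implies` is our own) enumerates all truth assignments:

```python
from itertools import product

def implies(p, q):
    """Material implication: p ==> q."""
    return (not p) or q

# Check, over every truth assignment, that
# (hypothesis ==> prediction) is equivalent to
# (not prediction ==> not hypothesis).
equivalent = all(
    implies(h, p) == implies(not p, not h)
    for h, p in product([False, True], repeat=2)
)
print(equivalent)  # True
```

This is why a failed prediction refutes a hypothesis, while a confirmed prediction does not prove it.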
Unlike the foundations of, say, physics, the foundations of computer science are logically verifiable; hypotheses play no part. So, while computer scientists have seen engineering revolutions aplenty, they have seen nothing like the transition from a Newtonian universe to an Einsteinian universe or from the phlogiston theory of combustion to Lavoisier's oxygen-based theory or any of the other foundational shifts described in Thomas Kuhn's Structure of Scientific Revolutions. Theoretical physicists, chemists, and biologists trained informally, if not formally, in the application of the scientific method know how to evaluate and work with competing hypotheses. The same cannot be said of theoretical computer scientists today. For them, the scientific method is unfamiliar terrain, with different rules and alternate notions of rigor. For example, assumptions must be weak, and hypotheses testable.
For all computer science as a field has to contribute to the natural sciences, it also has much to learn.
Keki M. Burjorjee
(1) Burjorjee, K.M. Hypomixability elimination in evolutionary systems. In Proceedings of the 13th Foundations of Genetic Algorithms Conference (Aberystwyth, U.K., Jan. 17–20). ACM Press, New York, 2015, 163–175.
(2) Popper, K. The Logic of Scientific Discovery. Routledge, London, U.K., 2007.
While Adi Livnat and Christos Papadimitriou's article (Nov. 2016) provided the rationale for a provocative magazine cover, the article itself began with a false claim and ignored a much simpler explanation for the success of sexual evolution. Shortly after life appeared on Earth, approximately 3.8 billion years ago, evolution began diversifying lifeforms in a very pragmatic way, with mutations that increased the ability of individuals to survive and reproduce being passed along to future generations, whereas those that were disadvantageous were naturally dropped. This process soon discovered that sexual reproduction worked better than simply subdividing, in that it allows advantageous mutations arising in different families to be combined, letting evolution proceed more rapidly in a way subdivision cannot. Sexual reproduction thus became dominant.
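The combining of advantageous mutations from different families that this letter describes is easy to illustrate. In this toy Python sketch (the genome encoding and all names are invented here), single-point crossover produces the double mutant that clonal subdivision of either lineage alone cannot:

```python
import random

# Genomes as bit lists: a 1 marks a beneficial mutation at that locus.
parent_a = [1, 0]  # lineage carrying mutation A only
parent_b = [0, 1]  # lineage carrying mutation B only

def crossover(a, b, rng):
    """Single-point crossover: the child takes a prefix from one parent
    and the corresponding suffix from the other."""
    point = rng.randint(1, len(a) - 1)
    return a[:point] + b[point:]

rng = random.Random(0)
children = {tuple(crossover(parent_a, parent_b, rng)) for _ in range(100)}
print((1, 1) in children)  # True: recombination yields the double mutant
```

An asexual descendant of either parent must wait for the second mutation to recur independently in its own lineage; recombination joins the two in a single generation.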
Nevertheless, the article said, "What is the role of sex in evolution? Reproduction with recombination is almost ubiquitous in life (even bacteria exchange genetic material), while obligate asexual species appear to be rare evolutionary dead ends. Yet there is no agreement among the experts as to what makes sex so advantageous."
How can there be no agreement when the reason for sexual evolution is so obvious? In order for sexual evolution to work, each generation must die, which some people view as inconvenient, prompting them to imagine an afterlife. Subdividing, on the other hand, produces potential immortals who are naturally less diverse because they mutate less radically than the sexy species.
P.S. I do not hold any of this against Christos Papadimitriou, whom I have known for 50 years.
Earnest's idea, first proposed by R.A. Fisher (1930) and H.J. Muller (1932), does not solve the problem; it is referenced in our online appendix, where the interested reader can begin to explore this fascinating topic. The debate among experts is ongoing, and our recent article contributed a fresh idea to it. Burjorjee did not back up with evidence his claim of the empirical success of genetic algorithms compared to, say, simulated annealing. And apropos philosophy of science, he may refer to Papadimitriou's 1995 article "Database Metatheory: Asking the Big Queries" (http://dl.acm.org/citation.cfm?id=211547) with its sections on T. Kuhn, K.R. Popper, and P. Feyerabend, and their relevance to computer science.
I am writing to express my dismay and disappointment at the cover of the November 2016 issue introducing the article "Sex as an Algorithm: The Theory of Evolution Under the Lens of Computation" by Adi Livnat and Christos Papadimitriou, finding it offensive and attention-grabbing in a way that is inconsistent with ACM's public mission.
While I would guess that most readers either do not care or thought the cover "funny" or "cute," I have talked to enough of my colleagues, who describe their reaction as "shocked," "appalled," "offended," and "embarrassed," to believe it is a serious issue that warrants further reflection.
Specifically, is it really appropriate for ACM, a professional organization that purports to represent and support all its members and all members of the computing discipline, to distribute an issue that some are embarrassed to receive in our mailbox, display on our desks or conference tables, or look at on our computers if somebody might be looking over our shoulders?
First, the research in question is not about sex but about sexual reproduction and its effect on diversity in populations. There is a major difference, and conflating the two in this way comes across as juvenile. I cannot help but think of "locker room talk."
Second, placing the huge, bold-faced word "Sex" on a hot pink cover creates an obvious and immediate association with women. Given the under-representation of women in the field, this kind of message is completely counterproductive and particularly reminds young women, who may be less certain about how welcome they are in the field, that they are to be associated with sex, not science.
Third, the unfortunate timing of this issue, which arrived during National Breast Cancer Awareness Month, was undoubtedly unintentional, but to those of us who have lost loved ones to breast cancer, the hot pink cover felt disrespectful and insensitive.
This may not seem like a big deal, and I am sure some readers are thinking I am overly sensitive and humorless. But quite honestly, it is tough enough being a woman in an extremely male-dominated field without feeling embarrassed and awkward about displaying my own professional organization's magazine in public.
In the end, I dropped it into the recycling bin without reading it.
Marie des Jardins
The cover in question, for which I am ultimately responsible, was meant to be humorous. Since several readers were offended by it, it is clear in retrospect the humor was misguided. For that, I sincerely apologize. This has been discussed by the design team, and we hope to learn from this mistake.
Moshe Y. Vardi
The following letter was published in the Letters to the Editor in the March 2017 CACM (http://cacm.acm.org/magazines/2017/3/213824).
Adi Livnat and Christos Papadimitriou's review article "Sex as an Algorithm" (Nov. 2016) was fascinating but mistitled. It discussed the benefits of conjugality. George C. Williams in Sex and Evolution distinguished the more general concept of conjugality from (eu)sexuality, in which the number of conjugal strains in the species is equal to the number of individuals participating in conjugation — two, in all conjugal species on this planet. This seems an important distinction, and I suggest the cover of Communications was misleading. In my own book Albatross I emphasized this and other distinctions, aiming to avoid nonsensical talk, as in that arising from "the gostak distims the doshes" in The Meaning of Meaning by C.K. Ogden and I.A. Richards.
Livnat and Papadimitriou's reference to their non-coverage of heterozygosity was revealing. I rather suspect heterozygosity is a prerequisite for sexuality proper; certainly a lot of sexual species are haploid in the gametic generation and diploid in the others.
Some of the mathematics as to the binarity of conjugation might be interesting. What are the chances that on some other world there may have evolved life with a triple helix, ternary conjugation — and so trisexuality?
John A. Wills