In his November 2024 Communications Vardi’s Insights column, “What Is Theoretical Computer Science?”, Moshe Vardi raised an interesting question: Is theoretical CS computer science or mathematics? That leads to a further question: What is mathematics? Most people outside the field consider mathematics to be the science of abstract structures, just as biology is the science of living creatures. However, most mathematicians we know define mathematics not by its subject but by its level of rigor: mathematics (in contrast, for example, to physics) draws conclusions at the highest level of rigor, that is, by proving theorems. Some of these theorems are abstract; some are related to applications. When a researcher proves a theorem about the Schrödinger equation and the result is of interest to physicists, this is of course physics, but it is also mathematics, since it is a theorem. From this viewpoint, the usual outsider’s either-or definition of mathematics is somewhat demeaning: if your theorem is of interest to physics, it is physics, not math, and the only results classified as mathematics are theorems of no interest to anyone outside mathematics itself.
This is not just a subjective viewpoint: for example, papers with CS-related theorems can be (and are) reviewed both in Computing Reviews and in Mathematical Reviews. When we joined the University of Texas at El Paso in the 1990s, it had two researchers working on nonmonotonic reasoning and logic programming, in particular proving theorems about them: Teodor Przymusinski in the math department and Halina Przymusinska in the CS department. Both were highly valued by their departments: Teodor because his theorems were useful, and Halina because her results were not just empirical but validated by proofs. Like them, in our opinion, Moshe Vardi is both a computer scientist and a mathematician; but since he himself considers computer science his main area, “computer scientist” should, of course, come first in his Wikipedia profile.
Olga Kosheleva, El Paso, TX, USA
Vladik Kreinovich, El Paso, TX, USA
Moshe Vardi ponders the nature of theoretical computer science (TCS) in his November 2024 Communications column, “What Is Theoretical Computer Science?” He contemplates its position as a branch of mathematics but then asks, “… what is mathematics?”
Mathematics could be seen as the invention and discovery of patterns within a context. Geometry’s patterns are relationships among points, lines, and angles, built upon Euclid’s five postulates. TCS’s patterns concern the nature and limits of computation, independent of any physical computer. Alan Turing laid the groundwork for TCS with his concept of the Turing Machine, leading to the Universal Turing Machine and his proof that the Halting Problem is undecidable. These concepts define the scope and limitations of computation itself, distinct from any physical system.
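Turing’s undecidability argument is compact enough to sketch in a few lines of code. The following Python fragment is a minimal sketch of the diagonalization at its core, assuming a hypothetical decider halts() that no library actually provides:

```python
# A minimal sketch of Turing's diagonalization argument, assuming a
# hypothetical total decider halts(program, data) that returns True
# exactly when program(data) would halt. No such decider can exist.

def halts(program, data):
    """Hypothetical universal halting decider (cannot actually be built)."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    # predicted to loop forever -> halt immediately

# Consider paradox(paradox): if halts(paradox, paradox) returns True,
# paradox(paradox) loops forever; if it returns False, paradox(paradox)
# halts. Either answer is wrong, so no total halts() can exist.
```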
Vardi objects to TCS being a branch of mathematics because it does not explain or predict observable natural phenomena the way theoretical physics does. However, TCS addresses a different realm: it predicts the possibilities and boundaries of computation, guiding technological development through abstract principles rather than physical laws. While TCS exists in an abstract realm, its principles drive tangible advances in cryptographic security, artificial intelligence, and quantum computing.
Thus, TCS is a silent engine of our technological world, shaping the digital landscape through principles that predict and constrain future innovation.
Jim Humelsine, Neptune, NJ, USA
Author’s response:
The letter writers got so wrapped up in the fourth paragraph of my November 2024 column that they seem to have stopped reading after it.
But the fifth paragraph starts with “But my objection to ‘TCS is a branch of mathematics’ is deeper than the sociological argument. I believe that thinking of TCS as a branch of mathematics is harmful to the discipline.”
I encourage the letter writers to read the rest of the column.
Moshe Y. Vardi, Senior Editor
Communications of the ACM
Houston, TX, USA
Analyzing ‘Meta-Analysis’
There has been unfounded skepticism in the electronic design automation community about whether our AlphaChip method, which uses reinforcement learning for chip placement, works as claimed in our 2021 Nature paper (https://bit.ly/4fHmFMc). This is despite the method serving as an increasingly integral part of the chip design process for three generations of TPUs, datacenter CPUs, and other chips across Google and Alphabet. Our open-sourced method is available for all to use, and many are doing so (see https://bit.ly/4fEmVeY).
Igor Markov recently published a “meta-analysis” (“Reevaluating Google’s Reinforcement Learning for IC Macro Placement”) in the November 2024 issue of Communications without disclosing his affiliation as a high-level employee at Synopsys, which makes commercial software that competes with our open-source release of AlphaChip. (After the article was published, Shankar Krishnamoorthy, head of Technology and Product Development at Synopsys, reached out to us regarding the Communications article to say, “Igor Markov’s comments and writings do not represent Synopsys views or opinions in any way.”) Markov’s meta-analysis looks at two non-peer-reviewed papers and makes baseless allegations of scientific integrity issues, all already found to be without merit by Nature.
One of the two sources “meta-analyzed” by Markov is a deeply flawed non-peer-reviewed publication by Cheng et al. (https://arxiv.org/abs/2302.11014) that claimed to replicate our Nature paper’s approach but failed to follow our methodology in major ways. In particular, the authors did no pre-training (we discussed the benefits of pre-training extensively in our Nature article, mentioning it 37 times, and empirically demonstrated its importance), robbing our learning-based method of its ability to learn from other chip designs. They also used 20X less compute and did not train to convergence, preventing our method from fully learning even on the chip design being placed. By analogy, this would be like evaluating a version of AlphaGo that had never seen a game of Go before (instead of being pre-trained on millions of games) and then concluding that AlphaGo is not very good at Go.
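For readers outside machine learning, the methodological gap described above can be sketched in a few lines of Python. This is purely illustrative; every name in it (PlacementPolicy, load_pretrained, train) is hypothetical, and it reproduces no code from either paper:

```python
# Illustrative sketch only: contrasts fine-tuning a pre-trained policy
# with training from scratch on a reduced compute budget. All names
# here are hypothetical stand-ins, not code from either paper.

class PlacementPolicy:
    """Stand-in for an RL policy network that places macros on a chip."""
    def __init__(self, weights=None):
        # Pre-trained weights encode experience from prior chip designs;
        # weights=None means the policy starts with no such experience.
        self.weights = weights

def load_pretrained(checkpoint_path):
    """Hypothetical loader for weights pre-trained on other designs."""
    return {"source": checkpoint_path}  # placeholder weights

def train(policy, design, steps):
    """Hypothetical RL loop optimizing placement reward on one design."""
    for _ in range(steps):
        pass  # sample placements, score them, update the policy ...

BUDGET = 1_000_000  # steps needed to converge (illustrative number)

# Methodology of the Nature paper, as the letter describes it:
policy = PlacementPolicy(weights=load_pretrained("pretrained.ckpt"))
train(policy, design="new_chip", steps=BUDGET)        # train to convergence

# Setup the letter attributes to Cheng et al.:
policy = PlacementPolicy(weights=None)                # no pre-training
train(policy, design="new_chip", steps=BUDGET // 20)  # ~20X less compute
```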
The second “meta-analysis” source is an unpublished and anonymous PDF, on which Markov is a shadow co-author (as he confirmed in a now-deleted LinkedIn post), meaning his Communications “meta-analysis” is effectively regurgitating his own unpublished, non-peer-reviewed arguments as if they were independent and validated by the scientific community.
In our view, the Markov Communications article should be relegated to the scrapheap. Thinly veiled fraud allegations, already found to be without merit by Nature, based on a “meta-analysis” of two flawed, non-peer-reviewed articles, and containing no new research contributions, do not deserve publication as a “Research and Advances” article in Communications. We are disappointed in the editorial and peer-review failings that allowed this.
Jeff Dean, Google DeepMind, Mountain View, CA, USA
Anna Goldie, Google DeepMind, Mountain View, CA, USA
Azalia Mirhoseini, Google DeepMind, Mountain View, CA, USA
Editor-in-Chief’s response:
Communications thanks Jeff Dean, Anna Goldie, and Azalia Mirhoseini for pointing out two minor issues with Igor Markov’s November 2024 Communications article. First, the article did not fully identify Markov’s affiliation when submitted, since the work was done before his engagement at Synopsys and had no connection to his employer. Second, Communications truncates long lists of authors in reference lists to “first author et al.” In this article, reference 5 was published with only the first author’s name, which meant a Communications reader could not find the other names, since the referenced paper itself carried no author list. (The article’s reviewers saw the complete author list in the reference list of the submitted manuscript.)
Both issues have been corrected in the online version of the article.
We have invited Jeff Dean and his co-authors to submit an article to Communications presenting technical arguments against the Markov article. Science has long been advanced by controversy, in which competitors challenge each other’s assumptions, techniques, and results. Open and untrammeled discussion, of which science has many examples, lets the scientific community better understand complex issues and form its own opinion of which side is correct.
James Larus, Editor-in-Chief
Communications of the ACM
Berkeley, CA, USA