Opinion

What Came First, Math or Computing?

One of the most fundamental conundrums in the philosophy of mathematics is the question of whether mathematics was discovered by humans or invented by them. 

CACM Senior Editor Moshe Y. Vardi

One of the most fundamental conundrums in the philosophy of mathematics is the question of whether mathematics was discovered by humans or invented by them. On one hand, it seems hard to argue that highly sophisticated mathematical objects, such as inaccessible cardinals, were discovered. On the other hand, as Albert Einstein asked, “How can it be that mathematics, being after all a product of human thought, which is independent of experience, is so admirably appropriate to the objects of reality?” The 19th-century mathematician Leopold Kronecker offered a compromise, saying “God created the integers, all else is the work of man.”

So let us consider the natural numbers. The Lebombo Bone is a bone tool made of a baboon fibula with incised markings, discovered in a cave in the Lebombo Mountains in Africa. More than 40,000 years old, the bone is conjectured to be a tally stick, its 29 notches counting, perhaps, the days of a lunar cycle. It has been referred to as the oldest mathematical artifact. But should we not call it the oldest computing artifact? Counting is, after all, the most basic form of computing.

Deductive mathematics was discovered by the Greeks about 2,500 years ago, a discovery that has been called “The Greek Mystery.” Pythagoras considered the proof of his theorem a “gift from the Gods.” A deductive proof offers indisputable evidence—the high road to truth.

The Greeks also discovered the Liar’s Paradox, where self-reference creates a statement whose truth or falsity is elusive. In the 19th century, Georg Cantor used self-reference to prove that there are infinitely many distinct infinities—a result that Kronecker dismissed with “There is no mathematics there.” Bertrand Russell then used self-reference to show that set theory, considered the foundational theory of mathematics, is inconsistent, launching the so-called Foundational Crisis. A mathematical proof provides indisputable evidence of mathematical truth, but what constitutes a proof?

In response to the crisis, David Hilbert, the reigning mathematician during the first part of the 20th century, launched Hilbert’s Program, which consisted of three legs. Hilbert aimed to show that mathematics is consistent (a mathematical statement and its negation cannot ever both be proved), mathematics is complete (all true mathematical statements can be proved), and mathematics is decidable (there is a mechanical way to decide whether a given mathematical statement is true or false).

In the 1930s, Kurt Gödel demolished the first two legs of Hilbert’s Program, showing that arithmetic is not complete and its consistency cannot be proven in arithmetic. Shortly after that, Alonzo Church and Alan Turing demolished the third leg. They defined computability and showed that mathematical truth is not computable. This result could be understood to say that mathematics transcends computation.

Computer science, nevertheless, was born out of the ruins of Hilbert’s Program: we got the notion of computability, the distinction between hardware and software, and the concept of a universal machine. In an amazing historical confluence, real computers were soon built: Konrad Zuse’s Z3 in 1941, the Atanasoff-Berry Computer (ABC) in 1942, and the ENIAC—the first digital, electronic, programmable computer—in 1946.

As the use of computing in science and business spread in the 1950s and 1960s, we soon discovered that being computable is not enough. Solving certain computational problems seems to require inordinate amounts of computational resources (time and memory). Certain problems seem to be amenable only to exhaustive search, which becomes impractical as problem instances grow. Computational complexity theory was developed to understand this phenomenon.

NP-completeness theory, which emerged in the early 1970s, aimed at explaining the difficulty of exhaustive search. Problems in NP are problems that have short solutions that can be checked efficiently. NP-complete problems are the hardest problems, in a formal sense, in NP. Boolean satisfiability, the very crux of deductive reasoning, was shown by Stephen Cook and Leonid Levin to be NP-complete. We still do not know, however, if NP-complete problems are truly intractable.
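
To make “checked efficiently” concrete, here is a minimal sketch in Python (an illustration of mine, not part of the column) of verifying a claimed satisfying assignment for a formula in conjunctive normal form. The check is a single pass over the clauses, which is exactly the kind of fast certificate verification that places satisfiability in NP.

def check_sat_certificate(clauses, assignment):
    # clauses: CNF formula as a list of clauses; each clause is a list of
    # literals written as (variable_name, wanted_value), e.g. ("x", True)
    # for the literal x and ("x", False) for its negation.
    # assignment: dict mapping variable name -> bool (the claimed solution).
    for clause in clauses:
        if not any(assignment[var] == wanted for var, wanted in clause):
            return False  # this clause is falsified, so the certificate fails
    return True  # every clause contains a true literal: the formula is satisfied

# Example: (x or not y) and (y or z), with the claimed solution x=True, y=True, z=False.
formula = [[("x", True), ("y", False)], [("y", True), ("z", True)]]
print(check_sat_certificate(formula, {"x": True, "y": True, "z": False}))  # prints True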

In 1979, building on NP-completeness theory, Cook and Robert Reckhow were finally able to answer the fundamental question of what a mathematical proof is: evidence so rigorous that it can be checked computationally.
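
In this view, a proof is simply a piece of data that a simple, fast program can accept or reject. The sketch below (again an illustration of mine, using resolution refutations, one standard propositional proof system that the column does not name) checks that every line of a purported proof is either an input clause or follows mechanically from two earlier lines, ending in the empty clause.

def resolve(c1, c2, pivot):
    # Resolve two clauses on the variable `pivot`; clauses are frozensets of
    # literals written as "x" or "~x". Returns None if resolution does not apply.
    pos, neg = pivot, "~" + pivot
    if pos in c1 and neg in c2:
        return (c1 - {pos}) | (c2 - {neg})
    if neg in c1 and pos in c2:
        return (c1 - {neg}) | (c2 - {pos})
    return None

def check_refutation(axioms, steps):
    # axioms: set of input clauses; steps: list of (clause, justification),
    # where justification is None (the clause is an axiom) or (i, j, pivot)
    # naming two earlier steps whose resolvent is the clause.
    derived = []
    for clause, why in steps:
        if why is None:
            if clause not in axioms:
                return False
        else:
            i, j, pivot = why
            if resolve(derived[i], derived[j], pivot) != clause:
                return False
        derived.append(clause)
    return frozenset() in derived  # a refutation must derive the empty clause

# Example: the formula (x) and (not x) is contradictory; here is its short refutation.
axioms = {frozenset({"x"}), frozenset({"~x"})}
proof = [(frozenset({"x"}), None),
         (frozenset({"~x"}), None),
         (frozenset(), (0, 1, "x"))]
print(check_refutation(axioms, proof))  # prints True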

Mathematics does not transcend computation after all. Rather, computation is at the very heart of mathematics.

So, what came first, math or computing? Neither! They were both developed by humans as a way to reason about the physical world. Math and computing have been entwined for the past 40,000 years and will continue to be so.


Join the Discussion (1)


  1. Dear Moshe,

    An important thread missing from your piece is that mathematics would be difficult without the invention of the clay tablet about 3500 BCE. While we can perform simple arithmetic in our heads, it is writing that sustained the logistics of the first city: Sumer in Mesopotamia. Tablets developed into paper, the printing press, and other machines that can use the printed word.

    But were there no cultures before writing? Presumably, the example of a 40,000-year-old tally stick shows there were. What we need to consider: is thought a Universal Machine?

    For thought to be Turing complete it would need to support a set of steps, have the ability to create loops in those steps, and have the ability to conditionally execute or skip those steps.

    You need only look at your grandma’s recipe for apple pie to see that it is broken down into a set of steps.

    And, if you can say, “if so, …”, or “if not, …”, it has conditionality.

    Further, if you can say, “Do X with the first of these”, and, “Do X with the rest of these”, then you are processing a list: creating a loop.

    Therefore, thought IS Turing complete. But here is the thing: this is self-evident; by successfully conveying any idea, we are ‘programming’ each other’s minds: speech is computational.

    Can we agree that thought precedes math?

    Martin
    martin@wheatman.net

    P.S. A demonstration of how speech is Universal can be found in the software at bitbucket.org/martinwheatman/enguage
