BLOG@CACM
# The First Heidelberg Laureate Forum 2013

I feel seriously lucky to be at the Heidelberg Laureate Forum (HLF) 2013, where 200 young researchers get to meet with and learn from 40 Turing Award, Fields Medal, Abel Prize, and Nevanlinna Prize winners for a whole week, from September 22-27, 2013. It is not every day that 40 of the best minds in mathematics and computer science congregate in one place and give talks not only about deep technical topics and the future of computing and mathematics, but also about lessons on "how to do research."

It is truly a humbling experience to meet all the laureates and listen to them speak. What struck me most was how approachable all of them were.

The forum aims to inspire young researchers through their interactions with the 40 laureates who have assembled at the beautiful and charming campus of Heidelberg University. And it certainly is inspiring. In many ways, this forum is modeled after the Lindau Nobel Laureate Meetings that connect Nobel Prize winners with young researchers.

__Sunday September 22, 2013: Pre-event Introductions of Laureates, Welcome and Musical Orchestra__:

The HLF event started on Sunday September 22, 2013 with formal introductions of the laureates, followed by informal mingling between the young researchers and the laureates, and finally ended in a musical play which provocatively posed the question "Can computers ever become good at tasks like producing music that seem to require taste, creativity and emotion?" The entire play was in German, with an orchestra in the background playing musical pieces composed by Mozart. There were two protagonists, a man and a woman. The man argued that computers are incapable of creating sublime music like Mozart's, while the woman forcefully argued in favor of computers being able to create great music. To prove her point, she demonstrated a program that stitched together random snippets of Mozart’s music selected by the audience, and the result sounded like, well, Mozart. She also subjected the male protagonist to a "Turing test," where the question posed to him was "Following are two pieces of music played by an expert pianist. Tell us which one was written by Mozart and which was generated by the computer." Apparently, both were computer generated.

To my untrained ear, all the computer-generated music sounded as good as that of any composer I have heard.

__Monday September 23, 2013: First Day of Lectures (Computer Science and Math)__:

**Raj Reddy’s Talk on "Who Invented the Computer?"**

Professor Reddy (Turing Award) from CMU gave a fascinating account of the history of computing from Leibniz to Turing. He laid out a comprehensive, multi-dimensional set of criteria for deciding who invented the computer "as we know it today." The criteria included questions like "Did person X design a general-purpose programmable machine for the first time," "Did their design include the idea of stored programs," "Did it have jump and conditional branching," and "Did the inventor actually manage to build a prototype" (Babbage gets eliminated on this one).

The finalists were Turing, von Neumann, Zuse, Babbage, and Atanasoff. The final winner was Turing, whose ACE computer design checked all the boxes (general-purpose, stored program, iteration and branching, RISC architecture, and an actual prototype). John von Neumann was a close second with his EDVAC design.

**Curtis T. McMullen’s talk on "Billiards in moduli space" (Math)**

Professor McMullen (Fields Medalist) from Harvard gave a talk about predicting how billiard balls on a frictionless table would bounce around (forever), producing very interesting patterns as the billiard tables take on unusual shapes. Since I am not an expert in this area of mathematics, there is not much I can say here. What I found fascinating, however, was his use of computer programs to find unusual structures and patterns. This is an instance of the increasing, but not widely known, use of computer programs in mathematics for finding patterns, producing counterexamples to conjectures, and proving theorems.

**Steve Smale’s talk on the Protein-folding Problem (Math)**

Steve Smale (Fields Medalist) from the City University of Hong Kong and Berkeley gave an account of the math behind the protein-folding problem and how his group has developed award-winning algorithms that can predict the folding of proteins. The problem is "Starting from a sequence of letters representing the amino acids in a protein, can you predict the final 3D structure of the protein?" The math required to model this problem is probably not that complicated, but coming up with efficient algorithms and implementing them is highly non-trivial.

**Ed Clarke’s talk on Model-checking and the Curse of Dimensionality (Computer Science)**

Ed Clarke (Turing Award) from CMU gave a fantastic talk on the general problem of verifying the correctness of computer programs against well-defined specifications, and more specifically about model checking. This topic is close to my heart since I work in formal methods, a field dedicated to semi-automatically verifying the correctness of programs against logic specifications through model checkers, theorem provers, type checkers, and programming-language-based verification approaches.

Model checking (invented by Clarke, Emerson, Queille, and Sifakis) is a technique to check whether programs, represented as state-transition systems, adhere to certain kinds of properties. Ed described the core ideas that made model checking a success: 1) symbolic model checking, 2) bounded model checking, 3) counterexample-guided abstraction refinement (CEGAR), and 4) efficient SAT and SMT solvers.
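To give a flavor of the bounded model checking idea mentioned above, here is a toy sketch (my own illustration, not code from any real model checker; production tools unroll the transition relation into a SAT/SMT formula instead of searching states explicitly). It explores a state-transition system up to a depth bound k and reports a counterexample trace if a "bad" state is reachable:

```python
# Toy explicit-state bounded model checking (BMC) sketch.
# A system is given as a set of initial states, a transition function,
# and a predicate identifying "bad" (property-violating) states.

def bmc(initial, transitions, is_bad, k):
    """Search up to depth k; return a counterexample trace
    (list of states) if a bad state is reachable, else None."""
    frontier = [[s] for s in initial]
    seen = set(initial)
    for _ in range(k + 1):
        next_frontier = []
        for trace in frontier:
            state = trace[-1]
            if is_bad(state):
                return trace  # counterexample found
            for nxt in transitions(state):
                if nxt not in seen:
                    seen.add(nxt)
                    next_frontier.append(trace + [nxt])
        frontier = next_frontier
    return None  # property holds up to bound k

# Example: a counter mod 8; check the property "counter never hits 5".
trace = bmc(initial={0},
            transitions=lambda s: {(s + 1) % 8},
            is_bad=lambda s: s == 5,
            k=10)
print(trace)  # [0, 1, 2, 3, 4, 5]
```

The returned trace is exactly the kind of concrete counterexample that makes model checking so useful to engineers in practice.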

Given that I work on SAT/SMT solvers, it was heartening to see the recognition of their impact on formal methods and software engineering in general.

A question that came up repeatedly throughout the many talks, informal discussions, and the panel was "can computers do real math," a question that is directly addressed in the context of formal methods for verifying program correctness (I discuss this question later on).

**Leslie Valiant’s talk on Learning as the Source of Life Phenomena**

Leslie Valiant (Turing Award) from Harvard gave probably the most interesting talk of the first day, on how computational learning theory can be applied to "explain evolution." The problem he addresses is that simply assuming a Darwinian model of random mutation and survival of the fittest does not explain how evolution could have occurred in a "mere" 4 billion years (the Universe itself is believed to be around 13.7 billion years old). According to Valiant, Darwin’s theory is only an outline, not a theory in the sense of a theory of physics, with a precise mathematical description and predictive power.

In order to cast something as a machine-learning problem, one has to have a target function. Valiant proposes that the target function in this case is an "ideal function" that indicates the optimal behavior (the output of the circuit being learned) in any possible environmental condition. The question of how quickly life evolved can then be analyzed using complexity theory, and we can ask whether a target function representing "survival of the fittest" can be learned easily.

**Panel Discussion on Day 1: Shafi Goldwasser, Leslie Valiant, Ed Feigenbaum, Alan Kay, Butler Lampson, and John Hopcroft (all computer science laureates)**

The panel discussion was inspiring, touching on topics ranging from "how to do research" (Ed Feigenbaum) to "can computers be used to construct mathematical proofs" (Lampson and Shafi) to the "evolution and acceptance of provable cryptography by both engineers and mathematicians" (Shafi).

To me the highlight was about "computer-aided" mathematical proof, a topic that seems to come up again and again. Shafi pointed out that there were three issues in this context: 1) Can a computer automatically come up with the conjectures/theorems to prove? 2) Can it find the proofs of such conjectures efficiently? and 3) Can such proofs be checked efficiently?

There was no clear agreement on whether an all-powerful theorem prover for mathematics was around the corner. The panelists did, however, acknowledge the role computers already play in mathematical research, from finding interesting structures in geometry and number theory to producing counterexamples to conjectures. Some great success stories in computer-aided proof were also highlighted, e.g., the proof of the four-color theorem. Another highlight was the mention of the computer-aided verification of the proof of the classification of simple groups by researchers at Microsoft Research, Cambridge, UK.

I also found Butler Lampson's characterization of software systems as "precise" and "approximate" very appealing. According to Lampson, software that has an exacting specification can be considered precise (e.g., air-traffic control), whereas most software is approximate in that the requirements are not the same for different people (e.g., a search engine). This distinction helps identify when one needs heavy-duty software-engineering approaches like formal methods to ensure that software complies with a precise specification, and when, as with approximate software, lots of testing should suffice (although security affects precise and approximate software equally, and remains a tough problem no matter what approach one uses).

**Vijay Ganesh** *is an Assistant Professor in the Electrical and Computer Engineering department at the University of Waterloo, Canada.*
