Communications of the ACM


Long Live Incremental Research!

Bertrand Meyer

[This article is a slightly updated version of a note I posted almost two years ago on my personal blog. At the recent Microsoft Software Summit in Paris I gave a short talk based on that note, and so many people told me they enjoyed it that I thought it would be appropriate to share the ideas with the readers of the CACM blog, many of whom are presumably interested in issues of research funding. Please note that the most enthusiastic text extracts from funding agencies appearing below are meant to be read aloud, with the proper accents of passion.]

The world of research funding has of late been prey to a new craze: paradigm-shift mania. We will only fund ten curly-haired cranky-sounding visionaries in the hope that one of them will invent relativity. The rest of you — bit-players! Petty functionaries! Slaves toiling at incremental research!  — should be ashamed of even asking.

Take this from the US National Science Foundation’s description of funding for Computer Systems Research [1]:

CSR-funded projects will enable significant progress on challenging high-impact problems, as opposed to incremental progress on familiar problems.

The European Research Council is not to be left behind [2]:

Projects being highly ambitious, pioneering and unconventional

Research proposed for funding to the ERC should aim high, both with regard to the ambition of the envisaged scientific achievements as well as to the creativity and originality of proposed approaches, including unconventional methodologies and investigations at the interface between established disciplines. Proposals should rise to pioneering and far-reaching challenges at the frontiers of the field(s) addressed, and involve new, ground-breaking or unconventional methodologies, whose risky outlook is justified by the possibility of a major breakthrough with an impact beyond a specific research domain/discipline.

Frontiers! Breakthrough! Rise! Aim high! Creativity! Risk! Impact! Pass me the adjective bottle. Ground-breaking! Unconventional! Highly ambitious! Major! Far-reaching! Pioneering! Galileo and Pasteur only please – others need not apply.

As everyone knows, including the people who write such calls, this is balderdash. First, 99.97% of all research (precise statistic derived from my own ground-breaking research; funding for its continuation would be welcome) is incremental. Second, when a "breakthrough" does happen — the remaining 0.03% — it was often not planned as a breakthrough.

Incremental research is a most glorious (I have my own supply of adjectives) mode of doing science. Beginning PhD students can be forgiven for believing the myth of the lone genius who penetrates the secrets of time and space by thinking aloud during long walks with his Italian best friend [3]; we all, at some stage, shared that delightful delusion. But every researcher, presumably including those who go on to lead research agencies, quickly grows up and learns that it is not how things happen. You read someone else's solution to a problem, and you improve on it. Any history of science will tell you that for every teenager who from getting hit by a falling apple intuits the structure of the universe there are hundreds of excellent scientists who look at the state of the art and decide they can do a trifle better.

Here is a still recent example, particularly telling because we have the account from the scientist himself. It would not be much of an exaggeration to characterize the entire field of program proving over the past four decades as a long series of variations on Tony Hoare's 1969 Axiomatic Semantics paper [4]. Here is Hoare's recollection, from his Turing Award lecture [5]:

In October 1968, as I unpacked my papers in my new home in Belfast, I came across an obscure preprint of an article by Bob Floyd entitled “Assigning Meanings to Programs.” What a stroke of luck! At last I could see a way to achieve my hopes for my research. Thus I wrote my first paper on the axiomatic approach to computer programming, published in the Communications of the ACM in October 1969.

Had the research been submitted for funding, we can imagine the reaction: "Dear Sir, as you yourself admit, Floyd has had the basic idea [6] and you are just trying to present the result better. This is incremental research; we are in the paradigm-shift business." And yet, while Floyd had the core concepts right, it is Hoare's paper that reworked and extended them into a form that makes practical semantic specifications and proofs possible. Incremental research at its best.
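For readers who have not seen the paper, the core of Hoare's axiomatic approach can be sketched in a few lines of modern notation (a simplified rendering, not the paper's original presentation). A triple $\{P\}\,S\,\{Q\}$ asserts that if precondition $P$ holds before statement $S$ executes, postcondition $Q$ holds afterwards:

```latex
% Assignment axiom: the precondition is the postcondition
% with the assigned expression substituted for the variable.
\{\,Q[E/x]\,\}\ x := E\ \{\,Q\,\}

% Rule of composition: proofs of consecutive statements
% chain through a shared intermediate assertion R.
\frac{\{P\}\ S_1\ \{R\} \qquad \{R\}\ S_2\ \{Q\}}
     {\{P\}\ S_1 ; S_2\ \{Q\}}
```

For instance, the assignment axiom immediately yields $\{x+1>0\}\ x:=x+1\ \{x>0\}$. Floyd had stated essentially these ideas over flowcharts; Hoare's increment was to recast them over program text, which is what made the method practical.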

The people in charge of research programs at the NSF and ERC are themselves scientists and know all this. How come they publish such absurd pronouncements? There are two reasons. One is the typical academic's fascination with industry and its models. Having heard that venture capitalists routinely fund ten projects and expect one to succeed, they want to transpose that model to science funding; hence the emphasis on "risk". But the transposition is doubtful, because venture capitalists assess their wards all the time and, as soon as they decide a venture is not going to break out, they cut the funding overnight, often causing the company to go under. This does not happen in the world of science: most projects, and certainly any project that is supposed to break new ground, are funded for a minimum of three to five years. If the project peters out, the purse-holder will only find out after spending all the money.

The second reason is a sincere desire to avoid mediocrity. Here we can sympathize with the funding executives: they have seen too many “here is my epsilon addition to the latest buzzword” proposals. The last time I was at ECOOP, in 2005, it seemed every paper was about bringing some little twist to aspect-oriented programming. This kind of research benefits no one and it is understandable that the research funders want people to innovate. But telling submitters that every project has to be epochal (surprisingly, “epochal” is missing from the adjectives in the descriptions above  — I am sure this will soon be corrected) will not achieve this result.

It achieves something else, good neither for research nor for research funding: promise inflation. Being told that they have to be Darwin or nothing, researchers learn the game and promise the moon; they also get the part about "risk" and emphasize how uncertain the whole thing is and how high the likelihood it will fail. (Indeed, since — if it works — it will let cars run on water seamlessly extracted from the ambient air, and with the surplus produce afternoon tea.)

By itself this is mostly entertainment, as no one believes the hyped promises. The real harm, however, is to honest scientists who work in the normal way, proposing to bring an important contribution to the solution of an important problem. They risk being dismissed as small-timers with no vision.

Some funding agencies have kept their heads cool. How refreshing, after the above quotes, to read the general description of funding by the Swiss National Science Foundation [7]:

The central criteria for evaluation are the scientific quality, originality and project methodology as well as qualifications and track record of the applicants. Grants are awarded on a competitive basis.

In a few words, it says all there is to say. Quality, originality, methodology, and track record. Will the research be “ground-breaking” or “incremental”? We'll find out when it's done.

I am convinced that the other agencies will come to their senses and stop the paradigm-shift nonsense. One reason for hope is in the very excesses of the currently fashionable style. The above short text from the European Research Council includes, by my count, nineteen ways of saying that proposals must be daring. Now we do not need to be experts in structural text analysis to know that someone who finds it necessary to state the same idea nineteen times in a single paragraph feels fundamentally insecure about it. He is, in truth, trying to convince himself. At some point the people in charge will realize that such hype does not breed breakthroughs; it breeds more hype.

In the meantime, what should we do? Most of us need funding. Also, there is nothing wrong with a little hype. After all, if you are of the shy and unassuming type, not convinced that you are smarter than everyone else, you do not become a researcher (but pick some profession where being more modest is OK, maybe politician).  Being too demure will hurt you. (I still remember a project proposal, many years ago, which came back with glowing reviews: the topic was important, the ideas right, the team competent. The agency officer’s verdict: reject. The proposers are certain to succeed, so it’s not research.) There is, however, a line not to be crossed. Highlighting and extrapolating the benefits of your proposed research is OK; making absurd representations is not.

So: one cheer for incremental research.

Wait, isn’t the phrase supposed to be “two cheers” [8]?

All right, but let’s go at it incrementally. One and one-tenth cheer for incremental research. 


[1] National Science Foundation, Division of Computer and Network Systems: Computer Systems Research (CSR), at

[2] European Research Council: Advanced Investigators Grant, at

[3] The Berne years; see any biography of Albert Einstein.

[4] C.A.R. Hoare: An axiomatic basis for computer programming, in Communications of the ACM, vol. 12, no. 10, pages 576–580, 583, October 1969. A retrospective on this historic paper appeared in the October 2009 issue of Communications (vol. 52, no. 10, pages 30–32).

[5] C.A.R. Hoare: The Emperor's Old Clothes, in Communications of the ACM, vol. 24, no. 2, pages 75–83, February 1981.

[6] Robert W. Floyd: Assigning meanings to programs, in Proceedings of the American Mathematical Society Symposia on Applied Mathematics, vol. 19, pages 19–31, 1967.

[7] Swiss National Science Foundation: Projects – Investigator-Driven Research, at

Disclosure: The SNSF has kindly funded several of my research projects over the past years.

[8] E.M. Forster: Two Cheers for Democracy, Edward Arnold, 1951.



I'm sorry, but IMHO you confuse computer science with science. Natural science investigates the rules that govern the natural world, while social science studies humans and their behaviour. Computer science, however, is all about invention. We invent architectures, programming languages, and build systems; we build systems to build better systems. But do we acquire knowledge that explains the world or allows predictions about the world? No! Thus, computer science has no raison d'être per se.

IMO computer science in academia is engineering with belt and braces. Engineering conventional stuff and incrementally advancing the state of the art is something that industry can often do very well. Risky things, very basic research, and projects that will likely fail are seldom done by companies. This gap should be filled by publicly funded research. I, however, almost agree with you: Long Live Incremental Research in Natural and Social Science!


I am writing this comment in reference to the comment posted on June 17, 2011, stating that computer science should not be confused with science.

The pursuit of understanding of the computational theory of cognition and behavior is well within the realms of computer science and is also intrinsically related to pure mathematics, biochemistry and even particle physics at a certain level. This is just one example that comes to mind when trying to think of computer science as a science and not just engineering; I am sure there are others.


Anyone who thinks that computer science is not natural science has never heard of theoretical computer science or mathematics, which is at the heart of computer science.


To the commentator as of June 17, 2011:

Even if it is true that some areas of computer science are more like engineering ... what is your point?
Are you saying that none of the engineering "sciences" (e.g., computer engineering, mechanical engineering, electrical engineering) should get any funding?

Besides: industry never does risky or long-term engineering research.



Despite the rhetoric, the vast majority of the research that is funded by NSF is good, solid, incremental science. However, when, because of budget constraints, success rates are in the 10-15% range, you might want to give NSF the benefit of the doubt when research that is judged by peer review to be breaking new ground gets a little higher priority for funding than work that the review panel (consisting of scientists such as yourself) regards as making a minor advance on a topic that has otherwise been beaten to death. If you would like to see more high-quality research funded, you had better make the case to society and its elected representatives, who control the NSF budget, that such research is worth funding. NSF is overseen by the National Science Board, which is made up of distinguished scientists like yourself. :-)


Computer science is the science of algorithmic processes that underlie information processing, regardless of whether such information processing occurs in computers built out of silicon chips, living cells, brains, economies, societies or cultures. It is a science on par with the physical, biological, and social sciences. The science of computing should not be confused with the engineering of artifacts that are designed to support computing or artifacts that are enabled by computing.


Funny, this. Some of the most dramatic advances, and most of the incremental advances, in computer-related fields have been made BY INDUSTRY, and that includes areas that relate to hard sciences. Medical devices abound, for example. Benoit Mandelbrot was working for IBM when he developed the fractal geometry that has been used massively in geographical resource exploration and computer graphics. Practical (industry-sourced) science funding has advanced science and engineering by increments in microchips and in communications technology.

Computing and biology become fused in genetics. There is research relating to computing breakthroughs that can be learned from the best digital/analog processing mechanisms in Creation: from neurological networks and from the DNA digital symbolic-language coding machinery.


According to The Innovator's Dilemma by Clayton Christensen, a paradigm shift in technology (disruptive innovation) is a technology or innovation that has the ability to replace or marginalize an existing technology (paradigm).
Some examples are light bulbs vs. candlelight; automobiles vs. horse carriages; digital vs. analog film technology; the Internet vs. previous information communication media. All of those technological paradigm shifts were preceded by scientific paradigm shifts.

A paradigm shift is not the result of a few persons' work over a few years. It is indeed the work (incremental, BUT TOWARDS A NEW PARADIGM!) of hundreds, thousands, and even millions of people, from setting up the conceptual framework to elaborating it, to learning how to use it.
From Maxwell, all the physicists before him, and those after him working on practical applications, to the people developing and building power stations and an entirely new infrastructure, to the millions of users of the new technology.

Working on improving the electric bulb makes sense until it starts to become evident that the discovery of electricity has the potential to provide a radically better solution.

No progress of humanity is possible without incremental research but it is best when it happens within the best available paradigm.

Today there is a new paradigm emerging in computing; see Peter Denning's "Computing is a Natural Science".
To ignore this paradigm shift would be to prefer candlelight to the electric bulb.

And there are paradigm shifts on the way today in a variety of research fields, almost all of them closely related to computing!


I recently had an experience with an ERC grant application (the one mentioned in the article). Among other things, I found a statement which can be summarized as "the problem is open and interesting, but clearly not innovative, as it aims at combining results and approaches by two of the major mathematicians of the early twentieth century; so it cannot be funded".
Had they said that I would not be able to solve the problem, I would have had nothing to object to.
But saying that this cannot be funded because new things are a priori more relevant than problems which have been open for a long time and which interested such great figures: that raises some questions about the criteria for funding, IMHO.
