BLOG@CACM

When Reviews Do More Than Sting

The Communications Web site features more than a dozen bloggers in the BLOG@CACM community. In each issue of Communications, we'll publish selected posts or excerpts.


Bertrand Meyer wonders why malicious reviews run rampant in computer science.
Bertrand Meyer: "The Nastiness Problem in Computer Science"
August 22, 2011

Are we malevolent grumps? Nothing personal, but as a community, computer scientists sometimes seem to succumb to negativism. They admit it themselves. A common complaint in the profession is that instead of taking a cue from our colleagues in more cogently organized fields such as physics, who band together for funds, promotion, and recognition, we are incurably fractious. In committees, for example, we damage everyone’s chances by badmouthing colleagues with approaches other than ours. At least this is a widely perceived view ("Circling the wagons and shooting inward," as Greg Andrews put it in a recent discussion). Is it accurate?

One statistic that I have heard cited is that in 1-to-5 evaluations of projects submitted to the U.S. National Science Foundation the average grade of computer science projects is one full point lower than the average for other disciplines. This is secondhand information, however, and I would be interested to know if readers with direct knowledge of the situation can confirm or disprove it.

More examples can be found in the material from a recent keynote by Jeffrey Naughton, full of fascinating insights. Naughton, a database expert, mentions that only one paper out of 350 submissions to SIGMOD 2010 received a unanimous "accept" from its referees, and only four had an average "accept" recommendation. As he writes, "either we all suck or something is broken!"

Much of the other evidence I have seen and heard is anecdotal, but persistent enough to make one wonder if there is something special with us. I am reminded of a committee for a generously funded CS award some time ago, where we came close to not giving the prize at all because we only had "good" proposals, and none that a committee member was willing to die for. The committee did come to its senses, and afterward several members wondered aloud what was the reason for this perfectionism that almost made us waste a great opportunity to reward good work. We come across such cases so often—the research proposal evaluation that gratuitously but lethally states that you have "less than a 10% chance" of reaching your goals, the killer argument "I didn't hear anything that surprised me" after a candidate's talk—that we consider such nastiness normal without asking anymore whether it is ethical or helpful. (The "surprise" comment is particularly vicious. Its real purpose is to make its author look smart and knowledgeable about the ways of the world, since he is so hard to surprise; and few people are ready to contradict it: Who wants to admit that he is naïve enough to have been surprised?)

A particular source of evidence is refereeing, as in the SIGMOD example. I keep wondering at the sheer nastiness of referees in CS venues.

We should note that the large number of rejected submissions is not by itself the problem. Naughton complains that researchers spend their entire careers being graded, as if passing exams again and again. Well, I too like acceptance better than rejection, but we have to consider the reality: with acceptance rates in the 8%–20% range at good conferences, much refereeing is bound to be negative. Nor can we angelically hope for higher acceptance rates overall; research is a competitive business, and we are evaluated at every step of our careers, whether we like it or not. One could argue that most papers submitted to ICSE and ESEC are pretty reasonable contributions to software engineering, and hence these conferences should accept four out of five submissions; but the only practical consequence would be that some other venue would soon replace ICSE and ESEC as the publication place that matters in software engineering. In reality, rejection remains a frequent occurrence even for established authors.

Rejecting a paper, however, is not the same thing as insulting the author under the convenient cover of anonymity.

The particular combination of incompetence and arrogance that characterizes much of what Naughton calls "bad refereeing" always stings when you are on the receiving end, although after a while it can be retrospectively funny; one day I will publish some of my own inventory collected over the years. As a preview, here are two comments on the first paper I wrote on Eiffel, rejected in 1987 by the IEEE Transactions on Software Engineering (it was later published, thanks to a more enlightened editor, Robert Glass, in the Journal of Systems and Software). The IEEE rejection was on the basis of such review gems as:

  • I think time will show that inheritance (section 1.5.3) is a terrible idea.
  • Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.

One of the reviewers also wrote: "But of course, the bulk of the paper is contained in Part 2, where we are given code fragments showing how well things can be done in Eiffel. I only read 2.1 arrays. After that I could not bring myself to waste the time to read the others."

This is sheer boorishness passing itself off as refereeing. I wonder if editors in other, more established disciplines tolerate such attitudes. I also have the impression that in non-CS journals the editor has more personal leverage. How can the editor of IEEE-TSE have based his decision on such a biased and unprofessional review? Quis custodiet ipsos custodes?

"More established disciplines." Indeed, the usual excuse is that we are still a young field, suffering from adolescent aggressiveness. If so, it may be, as Lance Fortnow has argued in a more general context, "time for computer science to grow up." After some 60 or 70 years we are not so young any more.


What is your experience? Is the grass greener elsewhere? Are we just like everyone else, or do we truly have a nastiness problem in computer science?


Readers’ Comments

This is only a problem for academics. In the real world (industry), the customers stand in judgment.

I am a physicist but have entered CS and now publish in this field. I do notice the attitudes you describe, and they scare me because I get the impression that every other computer scientist is very insecure. Rude comments from reviewers are common, and editors seem not to care. Worse, it is common for reviewers to be clueless and barely understand the paper they review. So if one reviewer is rude and clueless, and two are knowledgeable and positive, the editor still mainly listens to the clueless one, simply because a negative critique carries more weight than a positive one in this field…

As a reviewer and as an author, I get the feeling (in some cases I actually know) that some of my (co)reviewers did one of two things: had someone less qualified (or not qualified at all) review the paper, without bothering to check the quality of the review; or reviewed the paper at the last possible minute, probably after several reminders from the program chair.

In either case, it is hard to get a fair review.

I am from physics, which has its own share of nastiness, different from what you describe. Right now, I work in a research organization dominated by computer scientists and have written and reviewed some computer science papers. At the risk of sounding haughty (I do not mean to), I would say:

As you have mentioned, computer science is a relatively new field and physics far more mature. This not only means that computer science has more upstarts reviewing and writing papers, but also that the quality of research varies from excellent to mediocre to rather poor, as opposed to physics or the natural sciences, where almost all research in a field is of similar quality (with respect to maturity). You might say that is good or not good; I don't know.

Also, computer scientists have far more funds to publish and hold conferences (at exotic locations), leading in turn to lots more papers to write and review, and all the related rage. I wrote one paper in two years and reviewed maybe a couple every year while I was in physics. I do some reviewing/writing activity every week in computer science research.

One challenge this poses to program chairs is that we can be misled by nasty reviews versus genuine rejections, especially when the nasty guy works hard. A solution might be to publish reviewer stats, including how often a reviewer is in the minority, average length of review, and so on. It might improve behavior if we knew that poor numbers could keep us off prestigious committees. We all want to be on program committees, and then act as if we can't be bothered and are too busy! This needs to change. Coming from industry, I know that sticks work better than carrots!
