Are we malevolent grumps? Nothing personal, but as a community computer scientists sometimes seem to succumb to negativism. They admit it themselves. A common complaint in the profession is that instead of taking a cue from our colleagues in more cogently organized fields such as physics, who band together for funds, promotion, and recognition, we are incurably fractious. In committees, for example, we damage everyone's chances by badmouthing colleagues with approaches other than ours. At least this is a widely perceived view ("Circling the wagons and shooting inward," as Greg Andrews put it in a recent discussion). Is it accurate?
One statistic that I have heard cited is that in 1-to-5 evaluations of projects submitted to the U.S. National Science Foundation the average grade of computer science projects is one full point lower than the average for other disciplines. This is secondhand information, however, and I would be interested to know if readers with direct knowledge of the situation can confirm or disprove it.
More such examples can be found in the material from a recent keynote by Jeffrey Naughton, full of fascinating insights (see his PowerPoint slides). Naughton, a database expert, mentions that only one paper out of 350 submissions to SIGMOD 2010 received a unanimous "accept" from its referees, and only four had an average "accept" recommendation. As he writes, "either we all suck or something is broken!"
Much of the other evidence I have seen and heard is anecdotal, but persistent enough to make one wonder if there is something special about us. I am reminded of a committee for a generously funded CS award some time ago, where we came close to not giving the prize at all because we only had "good" proposals, and none that a committee member was willing to die for. The committee did come to its senses, and afterwards several members wondered aloud what was the reason for this perfectionism that almost made us waste a great opportunity to reward successful initiatives and promote the discipline.
We come across such cases so often—the research proposal evaluation that gratuitously but lethally states that you have "less than a 10% chance" of reaching your goals, the killer argument "I didn't hear anything that surprised me" after a candidate's talk—that we consider such nastiness normal without asking any more whether it is ethical or helpful. (The "surprise" comment is particularly vicious. Its real purpose is to make its author look smart and knowledgeable about the ways of the world, since he is so hard to surprise; and few people are ready to contradict it: Who wants to admit that he is naïve enough to have been surprised?)
A particular source of evidence is refereeing, as in the SIGMOD example. I keep wondering at the sheer nastiness of referees in CS venues.
We should note that the large number of rejected submissions is not by itself the problem. Naughton complains that researchers spend their entire careers being graded, as if passing exams again and again. Well, I too like acceptance better than rejection, but we have to consider the reality: with acceptance rates in the 8%-20% range at good conferences, much refereeing is bound to be negative. Nor can we angelically hope for higher acceptance rates overall; research is a competitive business, and we are evaluated at every step of our careers, whether we like it or not. One could argue that most papers submitted to ICSE and ESEC are pretty reasonable contributions to software engineering, and hence that these conferences should accept four out of five submissions; but the only practical consequence would be that some other venue would soon replace ICSE and ESEC as the publication place that matters in software engineering. In reality, rejection remains a frequent occurrence even for established authors.
Rejecting a paper, however, is not the same thing as insulting the author under the convenient cover of anonymity.
The particular combination of incompetence and arrogance that characterizes much of what Naughton calls "bad refereeing" always stings when you are on the receiving end, although after a while it can be retrospectively funny; one day I will publish some of my own inventory, collected over the years. As a preview, here are two comments on the first paper I wrote on Eiffel, rejected in 1987 by the IEEE Transactions on Software Engineering (it was later published, thanks to a more enlightened editor, Robert Glass, in the Journal of Systems and Software, 8, 1988, pp. 199-246). The IEEE rejection was based on review gems such as the following.
One reviewer wrote: "But of course, the bulk of the paper is contained in Part 2, where we are given code fragments showing how well things can be done in Eiffel. I only read 2.1 arrays. After that I could not bring myself to waste the time to read the others." This is sheer boorishness passing itself off as refereeing. I wonder if editors in other, more established disciplines tolerate such attitudes. I also have the impression that in non-CS journals the editor has more personal leverage. How could the editor of IEEE-TSE have based his decision on such a biased and unprofessional review? Quis custodiet ipsos custodes?
"More established disciplines": Indeed, the usual excuse is that we are still a young field, suffering from adolescent aggressiveness. If so, it may be, as Lance Fortnow has argued in a more general context, "time for computer science to grow up." After some 60 or 70 years we are not so young any more.
What is your experience? Is the grass greener elsewhere? Are we just like everyone else, or do we truly have a nastiness problem in computer science?
My personal favorite is the rejection letter that Ben Shneiderman received in 1972 from Communications of the ACM: "I feel that the best thing the authors could do is collect all copies of this technical report and burn them, before anybody reads them." http://www.cs.umd.edu/hcil/members/bshneiderman/nsd/rejection_letter.html
This is only a problem for academics. In the real world (industry), the customers stand in judgement.
Here are the more precious excerpts from the refereeing I received today (about the paper at http://www.andrebarbosa.eti.br/P_different_RP_Proof_Eng.pdf):
"This is one of those proofs that is 'not even wrong', quoting Pauli."
"I cannot follow the paper. It makes no sense."
I am a physicist but have entered CS and now publish in this field. I do notice the attitudes you describe, and they scare me, because I get the impression that every other computer scientist is very insecure. Rude comments from reviewers are common, and editors seem not to care. But more than that, it is common for reviewers to be clueless and barely understand the paper they review. So if one reviewer is rude and clueless, and two are knowledgeable and positive, the editor still mainly listens to the clueless one, simply because a negative critique carries more weight than a positive one in this field...
I have encountered the nastiness phenomenon in reviews. This can be frustrating, especially when malice is coupled with what seems a deliberate misunderstanding, i.e., a malicious interpretation of what is being said or done.
Case in point: in a manuscript we defined a precondition for a set used in a certain method. Let us call it P(S). The existing method had a different precondition Q(S). We showed that Q(S) implies P(S), but not vice versa, and that P(S) is still sufficient for S to be useful. The review -- and I was assured by the editor that the reviewer was an ironclad expert on the very topic -- was a blunt rejection saying (more or less) "the original condition is not broken, so it should not be weakened".
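To make the weakening argument concrete, here is a toy sketch; the conditions and the method below are hypothetical stand-ins I invented for illustration, not the manuscript's actual ones. Suppose the original precondition Q demands a non-empty set of positive numbers, while the method only needs the weaker P, any non-empty set:

```python
def q(s):
    # Original, stronger precondition: non-empty and all elements positive.
    return len(s) > 0 and all(x > 0 for x in s)

def p(s):
    # Weakened precondition: non-empty suffices.
    return len(s) > 0

def max_element(s):
    """Returns the largest element; correctness only requires P(s)."""
    assert p(s), "precondition P violated"
    return max(s)

# Q implies P: every set satisfying q also satisfies p.
# The converse fails: {-1} satisfies p but not q,
# yet max_element({-1}) is still well defined.
```

Any S satisfying Q also satisfies P, but a set like {-1} satisfies only P, and the method still works on it: weakening the precondition enlarges the method's domain without breaking any existing caller, which is exactly why "the original condition is not broken" misses the point.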
It seems to me that, in the computer science community, malice is taken as a sign of expertise. Perhaps because some experts in the past were known to be malicious, some of us try to imitate them in the hope of building a reputation of our own.
I wonder if our publication model, i.e., favoring conferences over journals, has something to do with this... Conference reviewing is inherently more adversarial, and it lacks the collaboration between editor and author to create a worthwhile publication that seems important for journals (at least the ones I've dealt with).
I suppose it could also be the other way around... Maybe we prefer the conference model because of our negativity?
As a reviewer and as an author, I get the feeling (and in some cases I actually know) that some of my (co-)reviewers did one of two things:
1. had someone else, less qualified or not qualified at all, review the paper, without bothering to check the quality of the review; or
2. reviewed the paper at the last possible minute, probably after several reminders from the Program Chair.
In either case, it is hard to get a fair review.
This is a great article - thank you for airing this issue. I recently sat through two talks by candidates for a professorial post. I was frankly embarrassed by the aggression in my colleagues' questions. The aggressive questions came from existing professors, almost as if they were unwilling to have anyone join their exalted ranks.
I am from physics, which has its own share of nastiness, though different from what you describe. Right now I work in a research organization dominated by computer scientists, and have hence written and reviewed some computer science papers. At the risk of sounding haughty (I do not mean to), I would say:
As you've said, computer science is a relatively new field and physics far more mature. This not only means that computer science has more upstarts reviewing and writing papers, but also that the quality of research varies from excellent to mediocre to rather poor, as opposed to physics or the natural sciences, where almost all research in a field is of similar quality (with respect to maturity). Now you might say that is good or not good; I don't know.
Also, computer scientists have far more funds to publish and hold conferences (at exotic locations), leading in turn to many more papers to write and review, and all the related rage. While in physics I wrote perhaps one paper every two years and reviewed maybe a couple every year; in computer science research I do some reviewing or writing every week.
Any field that is still adolescent is bound to have its rage problems, so I do not find it surprising that computer science research is the way you describe (some of it in conformity with what I've observed). Maybe there is too much money in it for its own good. I don't know. Just saying.
One challenge this poses to program chairs is that we can be misled: a nasty review can look like a genuine case for rejection, especially when the nasty reviewer works hard at it. A solution might be to publish reviewer statistics, including how often a reviewer is in the minority, the average length of their reviews, and so on. Behavior might improve if we knew that poor numbers could keep us off prestigious committees. We all want to be on program committees, and then act as if we can't be bothered and are too busy! This needs to change. Coming from industry, I know that sticks work better than carrots!
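As a rough sketch of what such reviewer statistics might look like (the data, names, scoring scale, and acceptance threshold below are all hypothetical; no real conference tooling is implied):

```python
from statistics import mean

# Hypothetical data: (reviewer, paper, score, review text); scores 1-5, higher is better.
reviews = [
    ("alice", "p1", 4, "Solid contribution; minor issues with the evaluation."),
    ("bob",   "p1", 1, "Reject. I did not hear anything that surprised me."),
    ("carol", "p1", 4, "Well written; accept with revisions."),
    ("alice", "p2", 2, "Interesting idea but the proofs are incomplete."),
    ("bob",   "p2", 1, "Reject."),
    ("carol", "p2", 2, "Borderline; leaning reject."),
]

def reviewer_stats(reviews, accept_threshold=3):
    """For each reviewer: how often their recommendation disagreed with the
    paper's majority verdict, and the average word count of their reviews."""
    # Majority verdict per paper: accept if the mean score reaches the threshold.
    by_paper = {}
    for _, paper, score, _ in reviews:
        by_paper.setdefault(paper, []).append(score)
    verdict = {p: mean(scores) >= accept_threshold for p, scores in by_paper.items()}

    stats = {}
    for reviewer, paper, score, text in reviews:
        s = stats.setdefault(reviewer, {"minority": 0, "n": 0, "words": []})
        s["n"] += 1
        s["words"].append(len(text.split()))
        if (score >= accept_threshold) != verdict[paper]:
            s["minority"] += 1  # this reviewer disagreed with the majority
    return {r: {"minority_rate": s["minority"] / s["n"],
                "avg_words": mean(s["words"])}
            for r, s in stats.items()}
```

On this toy data the habitual naysayer stands out with a 50% minority rate and one-line reviews, which is precisely the signal a program chair could use.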