In a recent social-media posting I quoted a blog entry by Michael Mitzenmacher, titled "Easy Now," which opened with the sentence "In the past year or so, I've gotten reviews back on multiple papers with the complaint that the result was too simple." He went on to assert: "From my standpoint, easy is a plus, not a minus." Both the original blog entry and my own posting were heavily commented on, with the general sentiment strongly sympathizing with Mitzenmacher. This unhappiness with the current state of computing-research conferences seems to reflect the general mood in the community, as has been discussed on these pages over the past few years.
A three-day Perspective Workshop on the subject of "Publication Culture in Computing Research" was held at Schloss Dagstuhl in November 2012 (for details, see http://bit.ly/1c9jxAS). A key motivation for the workshop was the observation that in spite of the pervasive dissatisfaction with the status quo, "the community seems no closer to an agreement whether a change has to take place and how to effect such a change." I would have liked to report that we reached agreement that change must take place and we figured out how to effect such a change. Unfortunately, we did not. We did, however, reach agreement on many issues.
One of the main insights developed at the workshop was that the computing-research publishing ecosystem, both conferences and journals, has simply failed to scale up with the growth of the field. Consider the following numbers. Between 2002 and 2012, Ph.D. production in computer science and engineering in North America doubled, roughly from 800 to 1,600 (numbers for other parts of the world are not available, regrettably). The number of conference papers published by ACM also roughly doubled, from 6,000 to 12,000. How did we respond to this growth in research production? Simple; instead of doubling the size of our conferences we doubled the number of conferences. The number of ACM conferences during this period grew from about 80 to almost 160!
We are all aware of the adverse effects of "conference inflation." Instead of serving as community-building events, many conferences have become paper-publishing events, the infamous "journals that meet in hotels." Matching papers and conferences has become more difficult, as reviewers struggle to find reasons to reject papers, such as "the result is too simple." Papers bounce from conference to conference, creating an ever-increasing review workload. It is not uncommon to hear of a paper being rejected summarily from one conference only to receive a best-paper award from another conference.
I find this failure to scale extremely ironic considering how much our discipline is about scaling: higher complexity, larger volumes of data, and larger problems. We have built the Internet, which is about to go interplanetary, but we have failed to scale our own institutions. Considered from that perspective, one path forward in the publication-culture debate is to note the growth of the field and resolve to grow our conferences rather than to continue proliferating them. Imagine SIGPLAN, for example, having, say, two large biannual meetings, rather than the 14 conferences SIGPLAN sponsors now.
A bold proposal along these lines is expressed in the Viewpoint "Publish Now, Judge Later" by Doug Terry on page 44 of this issue. Terry starts with the observation that computing-research conferences today face a reviewing crisis, with too many submissions and not enough time for reviewers to carefully evaluate each one. The result is that the process, meant to identify the papers of the "highest quality," is itself of questionable quality. In fact, there is evidence that while reviewers may reach consensus on the small fraction of the strongest submissions and the small fraction of the weakest submissions, there is no consensus on the main bulk of the submissions, and the final accept/reject decisions are essentially random.
Terry, therefore, proposes an approach where conferences accept any paper that extends the current body of knowledge, as it is extremely difficult to judge the true significance of any new research result. In this approach, a conference publication is not the final publication of a research result, but its first publication. Through discussions and follow-on journal publication, the community will eventually reach judgment on the significance of the result.
The change from "reject as default" to "accept as default" would be a significant change to our publication culture. I do not expect to see such a change adopted quickly or widely. It would be nice, however, to see one computing-research subcommunity be brave enough to experiment with it. To quote a Chinese proverb, "A journey of a thousand miles begins with a single step."
Moshe Y. Vardi, EDITOR-IN-CHIEF
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.
I had a series of "bad" review experiences last year and was about to make my new year's resolution never to volunteer for reviews again (most likely a career-damaging decision for a junior academic). Thanks to Moshe and Doug for bringing the "review review" again to the forefront of discussion. Over the years we have made several tweaks to the process (meta reviews, rebuttals, etc.), but it is probably time for a complete makeover. Here are some of my quick thoughts along the lines of the proposed "accept as default" paradigm.
a) If a review is strictly an intellectual and scientific reaction to a research finding, then I do not see any reason why the reviews and reviewer information should be confidential. In fact, the dialogue that takes place among the reviewers, meta reviewers, and the authors can be an important part of the overall academic discourse. Making the whole process transparent will, IMHO, automatically improve the quality of the reviews. I hypothesize that in such a model there will be a steady decline of reviews that simply say "not enough experiments" and reject the paper.
b) The nature of human discourse has changed in unprecedented ways. For example, ten years ago it would not have been possible for me (a junior faculty member) to comment on, participate in, or butt in on a conversation between two CS heavyweights. It is time for the evaluation of scientific discourse to catch up. To do so, we should rethink the review process to take advantage of the technologies we have. For example, we should try to move toward a process that captures and disseminates review activity in close to real time. EasyChair, while providing a convenient, online way to conduct reviews, is essentially a tool that merely automates the traditional review process.
So many publications are minor extensions of known research results or of the state of the art, backed by excellent, detailed evaluation. Such publications routinely win out against papers that tackle novel or risky topics or present fresh approaches but are inherently hard to evaluate (e.g., precision/recall metrics do not do them justice, and large-scale user evaluations of qualitative criteria are hard to conduct and often fail to impress reviewers). One option is to create a forum where experienced researchers and practitioners do the reviewing, rather than relying on graduate students, in the hope that they can better judge the significance or novelty of a piece of research.
The time is also ripe for authors to evaluate the reviewers, as part of a more democratic way of publishing innovative work. Innovation is so important to the advancement of science that it cannot be left solely in the hands of journal reviewers.