Unlike in most other academic fields, refereed conferences are generally the most prestigious publication venues in computer science. Some people have argued computer science should "grow up" and adopt journals as the main venue of publication, and that chairs and deans should base hiring and promotion decisions on candidates' journal publication records rather than their conference publications.a,b
While I share a lot of the sentiments and goals of the people critical of our publication culture, I disagree with the conclusion that we should transition to a classical journal-based model similar to that of other fields. I believe conferences offer a number of unique advantages that have helped make computer science dynamic and successful, and can continue to do so in the future.
First, let us acknowledge that no peer-review publication system is perfect. Reviewers are inherently subjective and fallible, and the number of papers being written is too large to allow as careful and thorough a review of each submission as would ideally be the case. Indeed, I agree with many of the critiques leveled at computer science conferences, but also think these critiques could apply equally well to any other peer-reviewed publication system. That said, there are several reasons I prefer conferences to journals:
Related to the last point, it is worthwhile to mention the NIPS 2014 experiment, where the program chairs, Corinna Cortes and Neil Lawrence, ran a duplicate refereeing process for 10% of the submissions to measure the agreement in the accept/reject decisions. The overall agreement was roughly 74% (83% on rejected submissions and 50% on accepted ones, which were approximately one-quarter of the total submissions), and preliminary analysis suggests standard deviations of about 5% and 13% in the agreement on rejection and acceptance decisions, respectively.c These results are not earth-shattering: prior to the experiment, Cortes and Lawrence predicted an agreement of 75% and 80%, respectively. So one interpretation is that they simply confirm what many of us believe: that there is a significant subjective element to the peer review process. I see this as yet another reason to favor venues with rotating gatekeepers.
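As a quick sanity check, the quoted percentages are internally consistent: a back-of-the-envelope weighted average (assuming, as stated, that roughly one quarter of submissions were accepted, and simplifying the experiment's actual agreement metric to a per-decision average) recovers the reported overall figure:

```python
# Back-of-the-envelope check of the NIPS 2014 numbers quoted above.
# Assumption: overall agreement is approximated by the acceptance-weighted
# average of the two per-decision agreement rates.
p_accept = 0.25       # roughly one quarter of submissions accepted
agree_accept = 0.50   # agreement on accepted submissions
agree_reject = 0.83   # agreement on rejected submissions

overall = p_accept * agree_accept + (1 - p_accept) * agree_reject
print(f"implied overall agreement: {overall:.1%}")
```

This yields about 74.8%, close to the reported figure of roughly 74%.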
Are conferences perfect? Not by a long shot. For example, I have been involved in discussionsd on how to improve the experience for participants in one of the top theory conferences, and I will be the first to admit that some of these issues do stem from the conferences' role as publication venues. The reviewing process itself can be improved as well, and a lot of it depends on the diligence of the particular program chair and committee members.
The boundaries between conferences and journals are not that cut-and-dried. A number of communities have been exploring journal-conference "hybrid" models that can be of great interest. My sense is that conferences are better at highlighting the works that are of broad interest to the community (a.k.a. "reviewing" the paper), while journals do a better job at verifying the correctness and completeness of the paper (a.k.a. "refereeing"), and at iterating with the author to develop more polished final results.
These are two different goals and are best achieved by different processes. For selecting particular works to highlight, comparing a batch of submissions by a panel of experts relying on many short reviews (as is the typical case in a conference) seems to work quite well. But fewer, deeper reviews, involving a back-and-forth between author and reviewer (as is ideally the case in a journal), are better at producing more polished work in whose correctness we can have more confidence. We can try to find ways to achieve the best of both worlds, and make the most efficient use of the community's attention span and resources for refereeing. I personally like the "integrated journal/conference" model where a journal automatically accepts papers that appeared in certain conferences, jumping straight into the revision stage, which can involve significant interaction with the author. The advantage is that by outsourcing the judgment of impact and interest to the conference, the journal review process avoids redundant work and can focus on the roles of verifying correctness and improving presentation. Moreover, the latter properties are more objective, and hence the process can be somewhat less "adversarial" and involve more junior referees such as students. In fact, in many cases these referees could dispense with anonymity and get some credit in print for their work.
Perhaps the biggest drawback of conferences is the cost in time and resources to attend them. This is an issue even for "top tier" conferences, though there the effort at least pays off for attendees, who get to hear talks on exciting new works as well as connect with many others in their community. But it is a greater problem for some lower-ranked conferences where many participants only come when they present a paper; in such cases it may indeed have been better if those papers had appeared in a journal. In fact, I wish it were acceptable for researchers' work to "count" even if it appeared in neither a conference nor a journal. Some papers can be extremely useful to experts working in a specific field, but have not yet advanced to a state where they are of interest to the broader community. We should think of ways to encourage people to post such works online without spending resources on refereeing or travel. While people often lament the rise of the "least publishable unit," there is no inherent harm (and there is some benefit) in researchers posting the results of their work, no matter how minor they are. The only problem is the drain on resources when these incremental works go through the peer review process. Finally, open access is of course a crucial issue, and I do believee both conferences and journals should make all papers, most of which represent work supported by government grants or non-profit institutions, freely available to the public.
To sum up, I completely agree with many critics of our publication culture that we can and should be thinking of ways to improve it. However, while doing so we should also acknowledge and preserve the many positive aspects of our culture, and take care to use the finite resource of quality refereeing in the most efficient manner.
a. Moshe Vardi, Editor's letter, Communications (May 2009); http://bit.ly/1UngC33
b. Lance Fortnow, "Time for Computer Science to Grow Up," Communications (Aug. 2009); http://bit.ly/1XQ6RrW
c. See the March 2015 blog post by Neil Lawrence: http://bit.ly/1pK4Anr
d. See the author's May 2015 blog post: http://bit.ly/1pK4LiF
e. See the author's December 2012 blog post: http://bit.ly/1UcYdFF
The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.
I wholeheartedly disagree. I can see some good points you make (e.g., the rotating "gatekeepers"), but I don't see a clear reason why our model is really better. Quite the contrary: I believe that the double role of our conferences (networking/new results vs. publication) harms both.
1. We do not really include the whole community at conferences but reject a lot of papers and thereby people. I just cannot believe that for many conferences 80% of the submissions are not interesting enough to be discussed at the conference. These rejection rates also lead to papers being resubmitted, sometimes several times, which in turn leads to very old results being presented. I'd rather see the new stuff.
2. Conference reviews are a one-shot thing. There is no way to really get into a discussion with the reviewers as in a journal. Rebuttals don't really solve this. And I don't see the advantages of hybrid models.
So while journals are not perfect either, a stronger focus on them would be an improvement for the whole community.
I think this is a classic example of arguing about the solution without defining the problem. As I see it, the problem that we need to address has (at least) 4 dimensions:
1. We need a means for researchers to publish unfinished work, which may or may not be of 'publishable' quality so that they can get comments from and discuss this with their peers. In some disciplines, this is accomplished through conference publication.
2. We have an overload of published papers - far too many for active researchers in a field to read - so we need to have fewer, better quality, more definitive publications. For example, rather than a PhD student publishing 4 or 5 papers in the course of their work, they should publish a single paper at the end. Of course, reducing the number of publications has significant implications for hiring, tenure and promotion processes. This is a problem for all scientific disciplines, not just computer science.
3. We need mechanisms for social, face to face interactions within the research community to support community building. Conferences and workshops play this role and it may be difficult for people to get funding for more informal replacement activities.
4. We need to ensure that research is accessible without excessive costs. Sadly, many journals by commercial publishers are very expensive and have restricted open-access policies. Publishing more in such journals rather than in open-access conferences would be doing the community a disservice.
The current system is not, in my view, working in a number of important respects, and this makes it harder to pursue research that is unfashionable or unusual and that does not conform to the expectations of the community. So, please let's have a discussion about the problems that we face rather than premature arguments about the most appropriate solutions.