At the most recent Snowbird conference, where the chairs of computer science departments in the U.S. meet every two years, there was a plenary session during which the panelists and audience discussed the peer review processes in computing research, especially as they pertain to a related debate on conferences versus journals. It's good to go back to first principles to see why peer review matters, which can then inform how we think about process.
In research we are interested in discovering new knowledge. With new knowledge we push the frontiers of the field. It is through excellence in research that we advance our field, keeping it vibrant, exciting, and relevant. How is excellence determined? We rely on experts to distinguish new results from previously known, correct results from incorrect, relevant problems from irrelevant, significant results from insignificant, interesting results from dull, the proper use of scientific methods from being sloppy, and so on. We call these experts our peers. Their/our judgment assesses the quality and value of the research we produce. It is important for advancing our field to ensure we do high-quality work. That's why peer review matters.
In science, peer review matters not just for scientific truth, but, in the broader context, for society's perception of science. Peer review matters for the integrity of science. Scientific integrity is the basis for public trust in us, in our results, in science. Most people don't understand the technical details of a scientific result, let alone how it was obtained, what assumptions were made, in what contexts the result is applicable, or what practical implications it has. When they read in the news that "Scientists state X," there is an immediate trust that "X" is true. They know that science uses peer review to vet results before they are published. They trust this process to work. It is important for us, as scientists, not to lose the public trust in science. That's why peer review matters.
"Public" includes policymakers. Most government executives and congressional members are not scientists. They do not understand science, so they need to rely on the judgment of experts to determine scientific truth and how to interpret scientific results. We want policymakers in the administration and Congress to base policy decisions on facts, on evidence, and on data. So it is important for policymakers that, to the best of our ability, we, as scientists, publish results that are correct. That's why peer review matters.
While I argue peer review matters, it's a whole other question of what the best process is for carrying out peer review. In this day and age of collective intelligence through social networks, we should think creatively about how to harness our own technology to supplement or supplant the traditional means used by journals, conferences, and funding agencies. Peer review matters, and now is the time to revisit our processes: not just procedures and mechanisms, but what it is we review (papers, data, software, and tools), our evaluation criteria, and our incentives for active participation.
I think we must continue to educate our students and the public about truth. Even if a research paper is published in the most respectable venue possible, it could still be wrong. Conventional peer review is essentially an insider game: It does nothing against systematic biases.
In physics, almost everyone posts their papers on arXiv. It is not peer review in the conventional sense. Yet, our trust in physics has not gone down. In fact, Perelman proved the Poincaré conjecture and posted his solution on arXiv, bypassing conventional peer review entirely. Yet, his work was peer reviewed, and very carefully.
We must urgently acknowledge that our traditional peer review is an honor-based system. When people try to game the system, they may get away with it. Thus, it is not the gold standard we make it out to be.
Moreover, conventional peer review places a high value on getting papers published. It is the very source of the paper-counting routine we go through. If it were as easy to publish a research paper as it is to publish a blog post, nobody would be counting research papers. Thus, we must realize that conventional peer review also has some unintended consequences.
Yes, we need to filter research papers. But the Web, open source software, and Wikipedia have shown us that filtering after publication, rather than before, can work too. And filtering is not so hard.
Filtering after publication is clearly the future. It is more demanding from an IT point of view. It could not work in a paper-based culture. But there is no reason why it can't work in the near future. And the Perelman example shows that it already works.
Peer-reviewed publications have been part of scientific scholarship since 1665, when the Royal Society's founding editor Henry Oldenburg created the first scientific journal. As Jeannette Wing nicely argued in her "Why Peer Review Matters" post, it is the public, formal, and final archival nature of the Oldenburg model that established the importance of publications to scientific authors, as well as to their academic standing and careers.
Recently, as the communication of research results reaches breakneck speeds, some have argued that it is time to fundamentally examine the peer review model, and perhaps to modify it to suit modern times. One such proposal, recently posed to me via email, is open peer review, a model not entirely unlike the Wikipedia editing model. Astute readers will appreciate the irony of how the Wikipedia editing model makes academics squirm in their seats.
The proposal for open peer review contends that the incumbent peer review process suffers from bias, suppression, and control by elites against competing non-mainstream theories, models, and methodologies. By opening up the peer review system, we might increase the accountability and transparency of the process and mitigate other flaws. Unfortunately, while we have anecdotal evidence of these issues, significant problems remain in quantifying these flaws with hard numbers and data, since reviews often remain confidential.
Perhaps more distressing is that several experiments in open peer review (such as those conducted by Nature in 2006, the British Medical Journal in 1999, and the Journal of Interactive Media in Education in 1996) have had mixed results in terms of the quality and tone of the reviews. Interestingly, and perhaps unsurprisingly, many of those invited to review under the new model decline to do so, potentially shrinking the pool of reviewers. This is particularly worrisome for academic conferences and journals at a time when we desperately need more reviewers to cope with the growing number of submissions.
A competing proposal might be open peer commentary, which elicits and publishes commentary on peer-reviewed articles, either before or after publication. In fact, recent SIGCHI conferences have already started experimenting with this idea, with several popular paper panels in which papers are first presented and opinions from a panel are then openly discussed with the audience. The primary aim here is to increase participation while also improving transparency. The idea of an open debate, with improved transparency, is of course the cornerstone of the Wikipedia editing model (and the PARC research project WikiDashboard).
Finally, it is worth pointing out the context in which these proposals might be evaluated. We live in a different time than Oldenburg did. In the meantime, communication technology has gone through several revolutions of gigantic proportions. Now, real-time research results are often distributed, blogged, tweeted, Facebooked, Googled, and discussed in virtual meetings. As researchers, we can ill afford to stare at these changes and not respond.
Beyond fixing problems of bias, suppression, and transparency, we also need to be vigilant about the speed of innovation and whether our publication processes can keep up. Web-based review-management systems like PrecisionConference have gone a long way toward scaling up the peer review process. What else can we do to respond to this pace of growth while remaining true to the openness and quality of research?
©2011 ACM 0001-0782/11/0700 $10.00