
Researchers Rev Up Review of Peer Review


Peer review is one of the cherished foundations of scientific advancement. Whether a researcher is a biologist, a chemist, a computer scientist, or even a political scientist or sociologist, the premise that subject-matter experts in a given field have read and critiqued a paper for its contributions to that field before it is published is sacrosanct.

However, a growing cohort of researchers in the vanguard of studying the methodology behind scientific advancement (the "science of science," if you will) has noted that the peer review process remains mired in an almost ad hoc mode: though the global scientific community is becoming more diverse, efforts to make certain peer reviewers are true "peers" are lagging behind.

"I think there is a responsibility for the scientific community here," said Cassidy Sugimoto, associate professor of informatics at Indiana University. "I've been to too many editors' conferences where I get a speech and I see the confidential files they give out to their editors about the people who are submitting and the people who are getting accepted and the people who are reviewing—and some of that data is really terrifying."

For Sugimoto, the key to improving the peer review process rests on access to data about those who submit their work to journals and those who perform reviews for those publications.

"Suppression of those data has incredibly negative effects on the scientific system," Sugimoto said, "so we need to stop being afraid we will look bad, because we have to understand every journal is going to look like that right now. Let's disclose that, let's show that, and then reward journals that start doing better, that show improvements in the diversity and composition of their reviewers, that show their gatekeepers look like their authors. If we really believe in peer review as the cornerstone of the scientific system, then we have to go about it from an evidence basis, and we're not doing that right now."

Ways and Wills Are Coalescing

Sugimoto is doing more than talking about improving the evidence base for analyzing peer review; she is actively researching data on gender and nationality in peer review, and is program director for the U.S. National Science Foundation's Science of Science and Innovation Policy (SciSIP) program.

One of the newest projects administered by Sugimoto's office is a three-year, $531,000 grant awarded to Daniel Acuna, assistant professor of information studies at Syracuse University. Acuna and his colleagues will investigate ways to quantify possible reviewer bias, assess techniques to estimate potential reviewers' characteristics, and ultimately test techniques that can automatically propose a review panel that is balanced in terms of reviewer choice, timing, and quality of the process.
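Acuna's techniques are still under development and are not detailed in this article. Purely as a hedged sketch of the general idea, an automated panel-proposal step might rank candidate reviewers by topical fit while capping how many panelists share a given attribute; everything below (the Reviewer fields, the scoring, the group cap) is a hypothetical illustration, not Acuna's actual method.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    expertise: float   # topical fit with the submission, 0..1 (hypothetical score)
    group: str         # an attribute to balance on, e.g., region (hypothetical)

def propose_panel(candidates, size=3, max_per_group=1):
    """Greedy sketch: take the best-fitting reviewers while capping
    how many panelists may share the same group attribute."""
    panel, group_counts = [], {}
    for r in sorted(candidates, key=lambda r: r.expertise, reverse=True):
        if group_counts.get(r.group, 0) >= max_per_group:
            continue  # this group is already at its cap; skip
        panel.append(r)
        group_counts[r.group] = group_counts.get(r.group, 0) + 1
        if len(panel) == size:
            break
    return panel

candidates = [
    Reviewer("A", 0.93, "region-1"),
    Reviewer("B", 0.90, "region-1"),
    Reviewer("C", 0.82, "region-2"),
    Reviewer("D", 0.75, "region-3"),
]
print([r.name for r in propose_panel(candidates)])  # ['A', 'C', 'D']
```

Even this toy version shows the tension Acuna's project must navigate: reviewer B is the second-best topical fit but is skipped to keep the panel balanced, a trade-off between expertise and composition that a real system would need to quantify rather than hard-code.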

Acuna said one of the elements separating his work from previous projects is an agreement with an organization that manages a variety of publication platforms, allowing him to analyze data across all of them rather than being locked into a journal-by-journal methodology.

"They have an open access journal, they have journals that are editor-driven, journals that are very high-impact, very prestigious, and entirely managed by scientists," Acuna said. "They want to understand. This group is very open to collaboration, and we are very thankful for their willingness."

eLife, founded by the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust, is another journal at the vanguard of making peer reviewers' data available to researchers; Sugimoto's recent work analyzed eLife data, for instance. The journal's executive director, Mark Patterson, said making a wide variety of metadata available to those striving to improve the scientific process was among the intentions of the founding agencies.

"They wanted to create a journal where our mandate is to experiment, to try out new approaches, to share what we find," Patterson said. "Not necessarily to be the best at everything, but to try different approaches that will make publishing new findings more effective, more reliable, and more rapid; all the things that it would need to be to serve science better. They actually had a view there was a lot of room for improvement in the existing system, because the scientists they fund kept telling them that.

"The perception is, there is a very unidimensional approach to assessing research by asking the question 'What journals have you published your work in?' not 'What have you done, what impact has it had, have other people used it and built on it, how reliable did it turn out to be, did it stand the test of time?' and those sorts of important questions don't get asked. The perception is that all that matters is getting your papers published in high-profile journals. There is more and more talk and recognition this is something that needs to be reformed."

To that end, eLife is not only making peer reviewers' data available to researchers and publishing studies on data made available by other platforms; it is also conducting a trial in which the yay-or-nay gatekeeping aspect of peer review is eliminated: once the editor has invited an article for full peer review, the journal is committed to publishing it, along with the reviewer reports, the decision letter, and the author response.

"By removing the gatekeeping role of reviewers, the peer review process can focus on how the work can be strengthened," Patterson and eLife Editor-in-Chief Randy Schekman wrote in announcing the trial. "Reviewers will know that it is very likely that their comments will be published and they will have an opportunity to gain recognition for well-crafted and thoughtful advice."

How Much Data Should Be Open?

Ideally, for the scientific community at large, reviewers' demographic information would be shared along with their reports, which could facilitate the recognition Patterson and Schekman mentioned, as well as supply both researchers and editors with the data needed to detect and reduce possible reviewer bias. The difficulty in capitalizing on such data, however, is the existing cultural context around publishing; some reviewers may balk at making identifiable data available out of concern they will be "judged for judging."
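None of the sources in this article prescribes a specific mechanism for releasing reviewer data without exposing identities. One standard approach, shown here only as an illustrative sketch, is to replace each reviewer's name with a keyed hash, so a journal can publish analyzable records under stable pseudonyms while the key that links pseudonyms back to people never leaves the journal. The key, field names, and record layout below are all hypothetical.

```python
import hashlib
import hmac

SECRET_KEY = b"journal-held-secret"  # retained by the journal, never published

def pseudonym(reviewer_id: str) -> str:
    """Keyed hash: the same reviewer always maps to the same pseudonym,
    but outsiders without the key cannot recover the identity."""
    digest = hmac.new(SECRET_KEY, reviewer_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]  # truncated for readability

# A hypothetical released record: demographics stay analyzable,
# the identity does not.
record = {
    "reviewer": pseudonym("jane.doe@example.edu"),
    "gender": "F",                  # self-reported, hypothetical field
    "career_stage": "postdoc",
    "recommendation": "major revision",
}
print(record)
```

A design like this addresses the "judged for judging" worry only partially: pseudonyms hide names, but small fields can still re-identify people in small communities, which is exactly the kind of risk Sugimoto raises next.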

"I am very concerned with power dynamics in academia," Sugimoto said. "Imagine that a young black woman gets asked to do a review, and she is maybe a doctoral student or a post-doc. She is going on the job market and is asked to do a review of a senior in the field at a lab where she wants to get a job. What happens if that peer review is open? If it's too nice, people will think she's not intelligent enough to be critical. If it's too critical, she'll lose her chances at her job. There are consequences to the way in which she writes that review. There are a lot of socio-political kinds of things that happen when you open peer review, and in the early years of this, we have to be careful of the consequences that it will have on the individuals, that it doesn't become personal. Peer review is supposed to be as neutral and objective as possible. I am not sure openness actually improves that neutrality, or if it makes it more of a political and social tool, in which you can wield your power, your individual reputation, in ways you couldn't when it was anonymized."

Peer review research may be able to take a lesson from healthcare in how to make some data open while protecting identities, allowing research to be conducted across multiple institutions. However, just as cross-institutional legal implications and incompatible technology platforms have hindered data-intensive healthcare research, peer review faces similar problems, according to Peter Flach, professor of artificial intelligence at the University of Bristol, U.K. Flach, working with colleague Simon Price, developed one of the first tools to analyze peer review, SubSift, which matches submitted conference or journal papers to potential peer reviewers based on the papers' similarity to the reviewers' published works in online bibliographic databases.
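SubSift's actual implementation is not reproduced here; as a minimal sketch of the general approach it embodies, a submission's text can be compared against each reviewer's publication record using TF-IDF vectors and cosine similarity, a standard information-retrieval technique. The reviewer names and toy abstracts below are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy data: each reviewer is represented by the concatenated
# text of their published work; the submission by its own abstract.
reviewer_profiles = {
    "reviewer_1": "kernel methods support vector machines statistical learning",
    "reviewer_2": "peer review bibliometrics scientometrics gender bias",
}
submission = "measuring gender bias in journal peer review outcomes"

# Fit TF-IDF over all documents; the submission is the last row.
corpus = list(reviewer_profiles.values()) + [submission]
tfidf = TfidfVectorizer().fit_transform(corpus)

# Cosine similarity between the submission and each reviewer profile.
scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
for name, score in sorted(zip(reviewer_profiles, scores),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Here reviewer_2 ranks first because their profile shares vocabulary ("peer review," "gender," "bias") with the submission, which is the essence of bibliography-based reviewer matching.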

"There is a tension between keeping things local, local to the journal or local to the conference, which is what we are currently essentially doing," Flach said. "But there are actually many issues that can only be solved if we start having some global system that records reviewing activities of people, but also records trajectories papers have taken through the ecosystem of conferences and journals and so on. I am skeptical such a system will actually happen, and it also has obvious downsides—the perception that 'big brother is watching' and who is maintaining the system, and there is a lot of sensitive information in there."

In addition to the sensitivity around personal data, Flach, who serves as editor-in-chief of the journal Machine Learning, said the institutional affiliations of journals might make it difficult to create any kind of universally accepted peer review analysis platform.

"If I take my own journal as an example, it used to be published by Kluwer, and Kluwer was bought by Springer. And now Springer has merged with Nature; these are massive organizations. We use a tool called Editorial Manager that a lot of journals use, but that's from an external vendor. Then, maybe, the vendor wants to keep their tool general, but I want it to provide features that are good for my journal, but not necessarily for somebody else.

"I know what features I would like, and I let my students and post-docs implement some of these, but I am not in a position to integrate them with the whole system, and even Springer Nature doesn't own the system. So there are a lot of organizational drivers in the ecosystem that make this a complicated issue."

Cross-disciplinary Cooperation in the Offing

Computer scientists are not the only researchers taking the lead in creating a data-rich environment for peer review analysis. Social scientists such as Wake Forest University political science professor Justin Esarey are also using models and simulation to try to discern the most impartial and efficient methods of peer review and paper evaluation.

"Peer review is a social process," Esarey said. "It's performed by people. It involves institutions, rules, and norms that govern how it works, so for those reasons I think it's appropriate for social scientists to study how it works. It's obviously something central to our professional lives as scientists."

One factor that might make the social sciences particularly valuable in creating computational frameworks for peer review analysis is that differences in what is considered proficient research are more pronounced there, according to Esarey and Sugimoto; this greater variability of opinion may make discerning signal from noise somewhat easier early on.

"Within the natural sciences, at least inside of any discipline, there is typically pretty high agreement on what constitutes good work, whereas in the social sciences it's much less likely people agree on what constitutes good work, because they are applying different standards and methodological lenses," Esarey explained. "For that reason, peer review is more likely to be a meaningful, consequential institution."

Said Sugimoto, "I think the more we start talking across disciplinary boundaries, we understand that that which we take for granted in fields varies so dramatically. And I think there can be a sense of norming that can happen across disciplines. When you start thinking 'wait, why do we accept this kind of behavior, why do we do these kinds of things?' and the more we see experimentation happen in other fields, the more likely we are to experiment on our own. I just hope we are seeing a growing scienticity around peer review and I hope that will lead to changes."

Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.
