Viewpoint

Technology, Conferences, and Community

Considering the impact and implications of changes in scholarly communication.

In 2009 and 2010, over a dozen Communications Editor’s Letters, Viewpoints, blog entries, reader letters, and articles addressed our conference and journal publication culture. The discussion covered the shift from a traditional emphasis on journals to the current focus on conferences, and the challenges of conference reviewing, but at its heart is our sense of community.2 One commentary spoke of a “death spiral” of diminishing participation.1 Several of the contributors joined a plenary panel on peer review at the 2010 Computing Research Association Conference at Snowbird.4

In a nutshell, the commentaries note that a focus on conference publication has led to deadline-driven short-term research at the expense of journal publication, a reviewing burden that can drive off prominent researchers, and high rejection rates that favor cautious incremental results over innovative work. Some commentators identify novel approaches to addressing these or other problems, but the dominant view is that we should return to our past practice of regarding journal publication as the locus of quality, which remains the norm in other sciences.

To understand whether this is possible, and I doubt it is, we must understand why computer science in the U.S. shifted to conference publication in the first place. As commentators have noted, it was not simply that computer science requires quick dissemination of results: Conferences did not become focused on quality in Europe or Asia, or in other competitive, quickly evolving fields such as neuroscience or physics. It is not that U.S. computer science could not expand journal page counts: computer science journals abroad expanded, passing the costs on to libraries. This Viewpoint considers other factors and outlines their implications.


Technology and a Professional Organization Drove the Shift to Conference Publication

By the early 1980s, the availability of text editing or word processing among computer scientists enabled the relatively inexpensive production of decent-looking proceedings prior to a conference. This was something new. Anticipating that libraries might shelve proceedings, ACM printed many more copies than conferences needed, at a low incremental cost. ACM also made them available by mail order after a conference at a very low price. Papers in ACM conferences were thus widely distributed and effectively archival. These are the two features that motivated the creation of journals centuries earlier.

Proceedings in Europe and Asia rarely had after-conference distribution, so to be archived, work there had to progress to journal publication. The shift to a conference focus did not occur. In 2004, a prominent U.K. researcher wrote about the CHI conference: “HCI’s love of conferences is a fluke of history. We all know this. CS in general has suffered from it, but is steadily moving away. CHI however digs in, with more and more death rattles such as CHI Letters. Being conference centered is bad for any field: bad for its archival material, bad for its conferences, and worst of all, really bad for the respect that we command with other communities. SIGCHI needs to move away from bolstering up conference publications. It needs to use journals for journal stuff and conferences for conference stuff.”a

He was wrong about the direction of computer science, and at least premature in diagnosing CHI’s expiration. The point, though, is that he saw the problem as an American problem, affecting CHI but not European HCI.


Knock-on Effects

This change in the complex ecology of scholarly communication was followed by a slow sequence of adjustments. ACM and IEEE had considered conference papers to be ephemeral, and expressly allowed verbatim or minimally revised republication in journals and transactions. With proceedings effectively archived even before digital libraries arrived, this policy was formally ended early in the 1990s.

A significant consequence is that it is increasingly difficult to evolve conference papers into journal articles. Publishers, editors, and reviewers expect considerable new work, even new data, to avoid a charge of self-plagiarism. Republishing the same work is undesirable, but we have inhibited the use of review and revision cycles to clean up conference papers, expand their literature reviews, and engage in the deeper discussions that some feel are being lost.

The pattern extends beyond systems. I edited ACM Transactions on Computer-Human Interaction and serve on the editorial boards of Human-Computer Interaction, Interacting with Computers, and ACM Computing Surveys. By my estimation, no more than 15% of the work published in highly selective HCI conferences later appears in journals. Journal publication is not a prerequisite for being hired into leading research universities. Today, the major U.S. HCI journals mostly publish work from Europe and Asia, where conferences are less central.

Now let’s consider reviewing, a primary focus of discussion, before turning to the impact of these changes on our sense of community.



Conference Selectivity and Effects on Reviewing

In other fields, journals focus on identifying and improving research quality; large conferences focus on community building and community maintenance; and workshops or small conferences focus on member support through specialist discussions of work in progress. This reflects Joseph McGrath’s division of group activities into those focused on production, team health, and member support.3

When conferences became archival, it was natural to focus on quality and selectivity. Even with authors preparing camera-ready copy, the expense of producing a proceedings was proportional to its page count. Library sales were a goal prior to the emergence of digital libraries in the late 1990s. Libraries were more likely to shelve thinner proceedings, and needed to be convinced the work had lasting value. These pressures drove down conference acceptance rates. In my field they dropped from almost 50% to 15% before settling in a range, 20%–25%, that we hope is acceptably selective in the eyes of academic colleagues yet not brutally discouraging to authors.

But it is discouraging to have submissions rejected. I know few if any people who submit with no hope of acceptance. In most fields, conferences accept work in progress. It is also discouraging when we see a paper presented and immortalized in the digital library that seems less worthy than a paper that was rejected. Review processes are noisy, and more so as the reviewer pool expands to include graduate students and others. Multidisciplinary fields, with diverse methodologies and priorities, deliver especially random outcomes.

Previous commentaries emphasized that caution and incrementalism fare better than innovation and significance in conference assessments. An incremental advance has a methodology, a literature review, and a rationale for publication that were bulletproofed in the papers it builds on. We try to channel papers to the most expert reviewers in an area, but to them incremental advances loom larger than they will to others. With pressure to reject ~75% and differing views of what constitutes significant work, the minor flaws or literature omissions that inevitably accompany novel work become grounds for exclusion. And in a zero-sum game where conference publication leads to academic advancement, a novel paper can be a competitive threat to people and paradigms, echoing concerns about journal conservatism in other fields.

Birman and Schneider describe the risk of a “death spiral” when senior people cease to review.1 Engaging young researchers as reviewers is valuable, but they are often more comfortable identifying minor flaws than judging whether work is significant. Every year, conferences in my area adjust the review process. But significant change is elusive, given the forces I have described.

Back to Top

Impact on Community

A leading neuroscientist friend described the profession’s annual meeting as a “must-attend” event “where people find out what is going on.” There are 15,000 presentations and 30,000 attendees. The quality bar is low. It is a community-building effort in a journal-oriented field.

In contrast, despite tremendous growth in many CS specializations, attendance at many of our conferences peaked or plateaued long ago. So has SIG membership, as shown in the accompanying table. Conferences proliferate, dispersing organizational effort and the literature, reducing a sense of larger community.

In my field, CHI once had many vibrant communication channels—a highly regarded newsletter, an interactive email discussion list, passionate debates in the halls and business meetings of conferences, discussants for paper sessions, and in the late 1990s an active Web forum. All of them disappeared. The CHI conference gets more submissions, but attendance peaked years ago. When a small, relatively polished subset of work is accepted, what is there to confer about?

High rejection rates undermine community in several ways. People don’t retain quite the same warm feeling when their work is rejected. Without a paper to give, some do not receive funding to attend. Rejected work is revised and submitted to other conferences, feeding conference proliferation, diverting travel funding, and dispersing volunteer efforts in conference management and reviewing. In addition, high selectivity makes it difficult for people in related fields to break in—especially researchers from journal-oriented fields or countries who are not used to polishing conference submissions to our level.

A further consequence is that computer scientists do not develop the skills needed to navigate large, community-building conferences. At our conferences, paper quality is relatively uniform and the number of parallel sessions small, so we can quickly choose what to attend. In contrast, randomly sampling sessions at a huge conference with 80% acceptance leads us to conclude that it is a junk conference. Yet with a couple of hours of preparation, combing the many parallel sessions for topics of particular interest, speakers of recognized esteem, and best paper nominations, and then planning meetings during some sessions, one can easily have as good an experience as at a selective conference. But it took me a few tries to discover this.

Courtesy of Moore’s Law, our field enjoys a constant flow of novelty. If existing venues do not rapidly shift to accommodate new directions, other outlets will appear. Discontinuities can be abrupt. Our premier conference for many years, the National Computer Conference, collapsed suddenly two decades ago, bringing down the American Federation of Information Processing Societies (AFIPS), then the parent organization of ACM and IEEE. Over half of all ACM A.M. Turing Award winners published in the AFIPS conferences. Most of those published single-authored papers. Yet the AFIPS conference proceedings disappeared, until they were recently added to the ACM Digital Library. The field moved on—renewal is part of our heritage. But perhaps we can smooth the process.

Back to Top

Possible Directions

Having turned our conferences into journals, we must find new ways to strengthen community. Rolling back the clock to the good old heyday of journals, ignoring changes wrought by technology and time, seems unlikely to happen. For one thing, it would undermine careers built on conference publication. More to the point, computer science in the U.S. responded first to technologies that enable broad dissemination and archiving. Other countries are now following; other disciplines will also adapt, one way or another. Instead of looking back, we can develop new processes and technologies to address challenges that emerged from exploiting the technologies of the 1980s.

With storage costs evaporating, we could separate quality determination from participation by accepting most conference submissions for presentation and online access, while distinguishing ~25% as “Best Paper Nominations.” Making a major conference more inclusive could pull participation back from spin-off conferences.
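
To make this concrete, here is a minimal sketch in Python of such a two-tier decision. It is purely illustrative: the scoring scale, quality floor, and 25% nomination cutoff are my assumptions, not any conference's actual policy.

    from dataclasses import dataclass

    @dataclass
    class Submission:
        title: str
        score: float  # mean reviewer score on an assumed 1-5 scale

    def two_tier_decisions(submissions, floor=2.0, nominate_fraction=0.25):
        """Accept everything above a low quality floor; flag the top slice."""
        accepted = sorted((s for s in submissions if s.score >= floor),
                          key=lambda s: s.score, reverse=True)
        n_top = round(len(accepted) * nominate_fraction)
        return [(s.title, "Best Paper Nomination" if i < n_top else "Accepted")
                for i, s in enumerate(accepted)]

Under such a scheme, most authors would present and appear in the digital library, while the nomination tier preserves a selectivity signal for hiring and promotion committees.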

A more radical possibility is inspired by the revision history and discussion pages of Wikipedia articles. Authors could maintain the history of a project as it progresses through workshop, conference, and journal or other higher-level accreditation processes. Challenges would have to be overcome, but such an approach might ameliorate reviewer load and multiple publication burdens—or might not.
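
As a sketch of what such a history might look like, consider the following Python data model, loosely inspired by a Wikipedia article's revision and discussion pages. The stages and field names are assumptions for illustration, not the schema of any existing system.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Version:
        venue: str            # e.g., "workshop", "conference", "journal"
        submitted: date
        manuscript_uri: str
        reviews: list = field(default_factory=list)  # reviews travel with the paper

    @dataclass
    class Project:
        title: str
        authors: list
        history: list = field(default_factory=list)  # ordered list of Versions

        def advance(self, version):
            """Append a new stage; earlier versions and their reviews stay
            visible, so later reviewers can build on prior rounds rather
            than starting from scratch."""
            self.history.append(version)

Because each review round travels with the paper, a journal editor could see what conference reviewers had already vetted, which is one way reviewer load and the multiple-publication burden might shrink.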

We are probably not approaching the bottom of a “death spiral.” But when AFIPS and the National Computer Conference collapsed, the transition from profitable success to catastrophe was remarkably rapid. Let’s continue this discussion and keep history from repeating.


Tables

Table. Membership in the top 10 ACM Special Interest Groups in 1990, 2000, and 2010. Currently, only two of 34 SIGs have more than 3,000 members.


References

    1. Birman, K. and Schneider, F.B. Program committee overload in systems. Commun. ACM 52, 5 (May 2009), 34–37.

    2. Fortnow, L. Time for computer science to grow up. Commun. ACM 52, 8 (Aug. 2009), 33–35.

    3. McGrath, J.E. Time, interaction, and performance (TIP): A theory of groups. Small Group Research 22, 2 (1991), 147–174.

    4. Peer review in computing research. CRA Conference at Snowbird, July 19, 2010; http://www.cra.org/events/snowbird-2010/.

Footnotes

    a. Gilbert Cockton, email communication, Jan. 22, 2004.

    DOI: http://doi.acm.org/10.1145/1897816.1897834
