There has been much discussion on Twitter, Facebook, and in blogs about problems with the paper reviewing system for HCI systems papers (see Landay's blog post and the resulting comment thread). Unlike papers on interaction methods or new input devices, systems are messy. You can't evaluate a system with a clean little lab study, or show that it performs 2% better than the last approach. Systems often try to solve a novel problem for which there was no previous approach. The value of these systems might not be quantifiable until they are deployed in the field and evaluated with large numbers of actual users. Yet doing such evaluation incurs a significant amount of time and engineering work, particularly compared to non-systems papers. The result, observed in conferences like CHI and UIST, is that systems researchers find it very difficult to get papers accepted. Reviewers reject messy systems papers that don't have a thorough evaluation of the system, or that don't compare the system against previous systems (which were often designed to solve a different problem).
At CHI 2010 there was an ongoing discussion about how to fix this problem. Can we create a conference/publishing process that is fair to systems work? Plans are afoot to incorporate iterative reviewing into the systems paper review process for UIST, giving authors a chance to have a dialogue with reviewers and address their concerns before publication.
However, I think the first step is to define a set of reviewing criteria for HCI systems papers. If reviewers don't agree on what makes a good systems paper, how can we encourage authors to meet a standard for publication?
Here's my list:
- Clear and convincing description of the problem being solved. Why isn't current technology sufficient? How many users are affected? How much does this problem affect their lives?
- How the system works, in enough detail for an independent researcher to build a similar system. Due to the complexities of system building, it is often impossible to specify all the parameters and heuristics being used within a 10-page paper limit. But the paper ought to present enough detail to enable another researcher to build a comparable, if not identical, system.
- Alternative approaches. Why did you choose this particular approach? What other approaches could you have taken instead? What is the design space in which your system represents one point?
- Evidence that the system solves the problem as presented. This does not have to be a user study. Describe situations where the system would be useful and how the system as implemented performs in those scenarios. If users have used the system, what did they think? Were they successful?
- Barriers to use. What would prevent users from adopting the system, and how have those barriers been overcome?
- Limitations of the system. Under what situations does it fail? How can users recover from these failures?
What do you think? Let's discuss.
Tessa Lau is a Research Staff Member and Manager at IBM's Almaden Research Center.