I've been saying for a while that there's a pretty big mismatch right now between what everyday people need with respect to computer security and what the computer security community, both research and industry, is actually doing.
My ammunition comes from Microsoft's Security Intelligence Report, which presents an overview of "the landscape of exploits, vulnerabilities, and malware" for the first half of 2011.
The report presents a number of fascinating findings.
Microsoft's report is important because it offers actual data on the state of software vulnerabilities, which gives us some insight into where we as a community should be devoting our resources. As one specific example, if we could teach people to avoid obviously bad websites and bad software, and if AutoRun were fixed or simply turned off, we could avoid well over 80% of the malware attacks seen today.
However, there's a big mismatch right now between what the data says about the vulnerabilities and what kind of research is being done and what kind of products are being offered. For example, there are at most a handful of research papers published on the user interaction side of protecting people from vulnerabilities, compared to the 500+ research papers listed in the ACM Digital Library on (admittedly sexier) zero-day attacks.
This mismatch isn't confined to research. Go to any industry trade show and try to count the number of companies with a real focus on end users. No, not network admins or software developers; I mean actual end users. You know, the people who use their computers as a means toward a goal rather than as the goal itself: accountants, teachers, lawyers, police officers, secretaries, administrators, and so on. The last time I went to the RSA conference, I think my count was two (though, to be honest, I may have been distracted by the sumo wrestler, the scorpions, and the giant castle run by the NSA).
Now, I don't want to understate the very serious risks addressed by the popular themes in computer security research and in industry products. Yes, we still need protection from zero-day attacks and man-in-the-middle attacks, and we still need stronger encryption techniques and better virtual machines.
My main point here is that attackers have quickly evolved their techniques toward what are primarily human vulnerabilities, and research and industry have not adapted as quickly. For computer security to really succeed in practice, there needs to be a serious shift in thinking, to one that actively includes the people behind the keyboard as part of the overall system.
A few months back, Bertrand Meyer wrote about the nastiness problem in computer science, questioning whether we as reviewers are "malevolent grumps." Judging by the user comments on the page, this hit a nerve with readers who were the victims of such grumpiness! Jeannette Wing then followed up on this with some numbers from NSF grant rejections that did indeed indicate that computer scientists are hypercritical. Much as I enjoy the colorful phrasing, I feel that a field full of malevolent grumps is not something we should simply accept. In fact, even if there are only a few grumps out there, it's in all our interests to civilize them.
So what can computer scientists do to reduce the nastiness problem when reviewing? Reviewers, authors, program committee members, conference chairs, and journal editors can all do their bit by simply refusing to tolerate discourtesy. Let's embrace the rule: We no longer ignore bad behavior. As reviewers, we can aim to be polite (yet stringent) ourselves but also to point out to co-reviewers if we find their impoliteness unacceptable. As authors, we do not have to accept a rude review and just lie down to lick our wounds. We can (politely!) raise the issue of rudeness with the program chair or editor so it is less likely to occur in the future. As editors, chairs, and program committee members, we can include the issue of courtesy in the reviewing guidelines and be firm about requesting reviewers to moderate their tone if we notice inappropriate remarks.
One of the first steps is to separate intellectual rigor from discourtesy. It is possible to be critical without being rude or dismissive. We can maintain standards in the field without resorting to ill-natured comments. (Believe it or not, it is also possible to ask genuine questions at a conference without seeking to show off one's own intellectual chops, but that is another matter). The purpose of reviewing, in my view, is to help an author improve their work, not to crush them under the weight of your own cleverness. It's not the author's fault that you had a bad day, or that some other reviewer just rejected your own paper.
Of course, there are some pockets of good reviewing practice within the field that we can draw on. I am sure there are many, but I have chosen CHI because I have been writing for it recently. The CHI conference is one of the biggest, most well-respected annual human-computer interaction conferences. In 2011, there were 2,000 attendees from 38 countries. This year there were 1,577 paper submissions with a 23% acceptance rate. This was the first year I submitted papers to it, and I have been impressed by the quality of the reviews in terms of their fairness, constructiveness, and level of detail. They contained greater insight and intellectual oomph than the reviews I recently received from a high-impact journal. For one of my CHI submissions, the reviewers did not agree with the paper on some points (it is on a controversial topic), but they still offered suggestions for how to resolve these issues rather than simply rejecting the paper. Was I just lucky in the reviewers I was allocated? Possibly, but the CHI reviewing process has some interesting features built in to maintain review quality.
This is a fairly heavyweight process, but if conference organizers adopted even just one more of the practices from points 1–5, or if journal editors added a courtesy clause to their review instructions, the world would be a slightly better place.
©2013 ACM 0001-0782/13/06