Jason Hong "Is the Computer Security Community Barking Up the Wrong Trees?"
http://cacm.acm.org/blogs/blog-cacm/144349-is-the-computer-security-community-barking-up-the-wrong-trees/fulltext
December 15, 2011
I’ve been saying for a while that there’s a pretty big mismatch right now between what everyday people need with respect to computer security and what the computer security community, in both research and industry, is actually doing.
My ammunition comes from Microsoft’s Security Intelligence Report, which presents an overview of "the landscape of exploits, vulnerabilities, and malware" for the first half of 2011.
The report presents a number of fascinating findings. For example:
- Very few exploits actually use zero-day vulnerabilities. Microsoft’s Malicious Software Removal Tool found no major malware families exploiting zero-day vulnerabilities, and Microsoft’s Malware Protection Center found that, of all exploit activity it observed, at most 0.37% involved zero-day attacks. Here, a zero-day is defined as a vulnerability for which the vendor had not released a security update at the time of the attack.
- 44.8% of malware detected required some kind of user action to propagate, for example clicking on a link or being tricked into installing the malware.
- 43.2% of malware detected made use of the AutoRun feature in Windows.
Microsoft’s report is important because it offers actual data on the state of software vulnerabilities, which gives us some insight into where we as a community should be devoting our resources. As one specific example, if we could teach people to avoid obviously bad websites and bad software, and if AutoRun were fixed or simply turned off, we could avoid well over 80% of the malware attacks seen today.
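As a rough illustration of how simple the "just turn AutoRun off" fix is, here is a minimal sketch (mine, not from the report) that disables AutoRun for all drive types by setting the documented NoDriveTypeAutoRun policy value in the Windows registry. The registry path and the 0xFF bitmask are the standard documented ones; treat the code itself as illustrative rather than as a recommended deployment.

```python
# Minimal sketch: disable Windows AutoRun for all drive types by setting
# the documented NoDriveTypeAutoRun policy value to 0xFF.
# Requires Windows and administrator privileges; winreg ships with
# Python's standard library on Windows.
import winreg

POLICY_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def disable_autorun() -> None:
    # CreateKeyEx opens the policy key, creating it if it does not exist.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY,
                            0, winreg.KEY_SET_VALUE) as key:
        # Each bit of the mask corresponds to a drive type (removable,
        # network, CD-ROM, ...); 0xFF disables AutoRun on all of them.
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0,
                          winreg.REG_DWORD, 0xFF)

if __name__ == "__main__":
    disable_autorun()
```

On a managed network the same value would more typically be pushed out through Group Policy than set per machine, but the point stands: the mitigation is a one-line configuration change, not a research problem.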
However, there’s a big mismatch right now between what the data says about vulnerabilities and the kinds of research being done and products being offered. For example, there are at most a handful of research papers on the user-interaction side of protecting people from vulnerabilities, compared to the 500+ research papers listed in the ACM Digital Library on (admittedly sexier) zero-day attacks.
This mismatch isn’t just in computer science research. Go to any industry trade show and try to count the number of companies with a real focus on end users. No, not network admins or software developers; I mean actual end users. You know, the people who use their computers as a means toward some other goal rather than as an end in itself: accountants, teachers, lawyers, police officers, secretaries, administrators, and so on. The last time I went to the RSA conference, I think my count was two (though, to be honest, I may have been distracted by the sumo wrestler, the scorpions, and the giant castle run by the NSA).
Now, I don’t want to understate the very serious threats that the popular themes in computer security research and industry products address. Yes, we still need protection from zero-day attacks and man-in-the-middle attacks, and we still need stronger encryption techniques and better virtual machines.
My main point here is that attackers have quickly evolved their techniques toward what are primarily human vulnerabilities, and research and industry have not adapted as quickly. For computer security to really succeed in practice, there needs to be a serious shift in thinking, to one that actively includes the people behind the keyboard as part of the overall system.
Judy Robertson "We Needn’t be Malevolent Grumps in 2012"
http://cacm.acm.org/blogs/blog-cacm/144878-we-neednt-be-malevolent-grumps-in-2012/fulltext
December 31, 2011
A few months back, Bertrand Meyer wrote about the nastiness problem in computer science, questioning whether we as reviewers are "malevolent grumps." Judging by the user comments on the page, this hit a nerve with readers who were the victims of such grumpiness! Jeannette Wing then followed up on this with some numbers from NSF grant rejections that did indeed indicate that computer scientists are hypercritical. Much as I enjoy the colorful phrasing, I feel that a field full of malevolent grumps is not something we should simply accept. In fact, even if there are only a few grumps out there, it’s in all our interests to civilize them.
So what can computer scientists do to reduce the nastiness problem when reviewing? Reviewers, authors, program committee members, conference chairs, and journal editors can all do their bit by simply refusing to tolerate discourtesy. Let’s embrace the rule: We no longer ignore bad behavior. As reviewers, we can aim to be polite (yet stringent) ourselves but also to point out to co-reviewers if we find their impoliteness unacceptable. As authors, we do not have to accept a rude review and just lie down to lick our wounds. We can (politely!) raise the issue of rudeness with the program chair or editor so it is less likely to occur in the future. As editors, chairs, and program committee members, we can include the issue of courtesy in the reviewing guidelines and be firm about requesting reviewers to moderate their tone if we notice inappropriate remarks.
One of the first steps is to separate intellectual rigor from discourtesy. It is possible to be critical without being rude or dismissive. We can maintain standards in the field without resorting to ill-natured comments. (Believe it or not, it is also possible to ask genuine questions at a conference without seeking to show off one’s own intellectual chops, but that is another matter). The purpose of reviewing, in my view, is to help an author improve their work, not to crush them under the weight of your own cleverness. It’s not the author’s fault that you had a bad day, or that some other reviewer just rejected your own paper.
Of course, there are pockets of good reviewing practice within the field that we can draw on. I am sure there are many, but I have chosen CHI because I have been writing for it recently. The CHI conference is one of the biggest, most respected annual human-computer interaction conferences. In 2011, there were 2,000 attendees from 38 countries. This year there were 1,577 paper submissions with a 23% acceptance rate. This was the first year I submitted papers to it, and I have been impressed by the quality of the reviews in terms of their fairness, constructiveness, and level of detail. They contained greater insight and intellectual oomph than the reviews I recently received from a high-impact journal. For one of my CHI submissions, the reviewers did not agree with the paper on some points—it is on a controversial topic—but they still offered suggestions for how to resolve these issues rather than simply rejecting the paper. Was I just lucky in the reviewers I was allocated? Possibly, but the CHI reviewing process has some interesting features built in to maintain review quality:
1. In the guidelines for reviewers, courtesy is explicitly mentioned: "please be polite to authors. Even if you rate a paper poorly, you can critique it in a positive voice. As part of polite reviewing practice, you should always state what is good about a paper first, followed by your criticisms. If possible, you should offer suggestions for improvement along with your criticism."
2. Authors can select both the subcommittee and the contribution type for a paper, which maximizes the chance that the paper will end up with reviewers with appropriate expertise, and that the reviewers will use criteria appropriate to the paper when assessing its suitability (e.g., not insisting on empirical evidence for a theoretical contribution).
3. The reviewing process is thorough and has several opportunities for unfairness or discourtesy to be weeded out. Each paper is blind-reviewed by three or more experts, and then an associate chair writes a meta-review to summarize the assessment of the paper and what action (if any) should be taken to improve it. In this way, individual grumpiness is moderated. A variant of this good practice from other conferences is when reviewers of the same paper can see each other’s reviews (once they have submitted their own), thus introducing peer pressure not to be awful.
4. Authors have a right to reply by writing a rebuttal of the review. The rebuttal is taken into account along with a revised meta-review (and potentially revised individual reviews) at a two-day committee meeting where final accept/reject decisions are made.
5. All submitting authors are surveyed about their opinions of the reviewing process—yet another chance to raise issues about unfairness or discourtesy that have not been addressed in a rebuttal.
6. This point is more about the nature of the conference itself than the reviewing procedures. Because CHI is so interdisciplinary, participants have a wide range of backgrounds, from art and design to hardcore engineering. They are therefore exposed to—and may in fact seek out—different perspectives that may make them open to different paradigms as reviewers. Could colleagues from the arts and social sciences be having a civilizing influence on the grumpy computer scientists?
This is a fairly heavyweight process, but if conference organizers adopted even just one more of the practices from points 1–5, or if journal editors added a courtesy clause to their review instructions, the world would be a slightly better place.