British philosopher and social theorist Jeremy Bentham would have wholeheartedly endorsed many of the accountability mechanisms Stefan Bechtold and Adrian Perrig outlined in their Viewpoint "Accountability in Future Internet Architectures" (Sept. 2014). It reminded me of Bentham's Panopticon (late 18th century), a prison where the prisoners would be motivated to behave in a more civilized manner by being made to think they were always under surveillance. Likewise, Bechtold and Perrig took the view that network users being tracked and made accountable for their actions would improve the Internet.
I am certain the majority of governments today would endorse this architecture, in which it would be possible to trace all Internet Protocol communication packets from source to destination and guarantee everyone is using the network responsibly. Indeed, many governments already pursue such a goal.
On the other hand, I am concerned the pervasive monitoring already present in today's global Internet without these technical aids might not be in society's best interests. I have been working with U.S. State Dept. sponsorship aiding a user group of journalists and democracy advocates in African countries, many with authoritarian tendencies. I am developing anonymization tools and training participants to use them. In many of the countries, accountability for accessing information considered innocuous in the West has dire consequences. Many of those lacking human rights protections found in Western democracies indeed use the technology produced in the West.
I dislike the idea of making people accountable for the information they consume, which would be a by-product of the ideas Bechtold and Perrig proposed.
Richard R. Brooks, Clemson, SC
Like Brooks, we strongly support privacy and anonymity for users. However, we also strongly disagree with an interpretation of our Viewpoint that says we envision a future Internet architecture that tracks users. Our aim was (and is) more discerning. As we pointed out, it is sometimes possible to achieve both privacy and accountability, whereby users maintain their privacy and become accountable only if they violate some policy, say, by perpetrating an attack. Moreover, anonymity can be achieved through an overlay network, even if the underlying network is accountable. We also highlighted the research challenges involved in balancing accountability, privacy, anonymity, political freedom, and other values. Brooks seems to have missed this core point.
Stefan Bechtold and Adrian Perrig, Zurich, Switzerland
Cormac Herley's article "Security, Cybercrime, and Scale" (Sept. 2014) focused on logical analysis of narrowly defined financial cybercrimes gainfully performed by untrusted remote perpetrators, not by embezzlers. The objective Herley specified in this logical model is improved security to reduce the risk of rational financially motivated untrusted perpetrators able to carry out all possible scaled attacks.
Having interviewed more than 200 cybercrime perpetrators over the past 40 years, I suggest reality is quite different. First, perpetrators possess only partial knowledge. They also make errors that change their objectives, take less of the financial assets than are available, do not necessarily consider cost, perform copycat attacks, and act under many other irrational personal conditions and circumstances, all of which were present in every case I studied.
Here is my threat model: Alice knows she cannot be sufficiently secure from attacks by Mallory and thus seeks to avoid negligence after Mallory (inevitably) attacks, successfully or not.
Herley correctly noted the limitations of successful risk reduction, but a different objective and strategy are more desirable for my model. The objective I advocate is security diligence, rather than risk reduction. It is a safer, more easily obtained and measured objective for the enterprise, more likely meets insurance requirements, and addresses a broader range of risks, including the risk of negligence on the part of the victim enterprise and the stakeholders within it. I have found (in practice) this is often more important than financial loss.
The diligence strategy is to implement security controls by engaging in benchmark studies, using standards, compliance, contracts, audits, good practices, available products, cost control, experts' opinions, and experimentation. The tough high-cost decisions are made by management fiat, not necessarily by risk reduction.
Donn B. Parker, Los Altos, CA
The Milestones section "Computer Science Awards, Appointments" (Sept. 2014) reported Jack Dongarra of the University of Tennessee, Knoxville, as the recipient of the ACM-IEEE Computer Society Ken Kennedy Award. Allow us to clarify. Dongarra was the 2013 recipient. Charles E. Leiserson of the Massachusetts Institute of Technology was recently named the recipient of the award for 2014 (see page 14).
Communications welcomes your opinion. To submit a Letter to the Editor, please limit yourself to 500 words or fewer, and send to firstname.lastname@example.org.
©2014 ACM 0001-0782/14/11
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from email@example.com or fax (212) 869-0481.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.