The theft scenario explored in "The Illusion of Security" (Mar. 2008) by David Wright et al. was realistic, with one exception, though it didn’t detract from the article’s conclusions. Wright et al. wrote: "A third driver, not so dissimilar from the first, is that the data thieves are also impelled by the profit motive." The thieves in this case were employees who disappeared one day and some time later turned up in Costa Rica.
I realize the scenario was abbreviated, with many details omitted, but after having interviewed more than 200 computer criminals in my own studies (see Crime by Computer, Scribners, 1976, and Fighting Computer Crime, Scribners, 1983), I conclude that few computer criminals are motivated purely by profit. Employees become criminals during employment mostly to solve personal problems that may involve money, sabotage, or espionage. They are often motivated by debt, relationships gone bad with other employees or spouses, personal dissatisfaction, or an attempt to hide poor or unethical business decisions. Had the thieves cited in the article been motivated by profit alone, they would more likely have stayed with the company, surreptitiously selling its products and services under the table or engaging in accounts-receivable or -payable fraud; such behavior would have been less dangerous and obviated the need to flee. On the other hand, they may have been in touch from the start with a buyer interested in the whole database who initiated the theft. However, I have found that collusion between two perpetrators is rare, and among three rarer still, when IT is involved. It usually takes only a single person with the proper skills, knowledge, resources, authority, motives, and objectives.
Donn B. Parker
Los Altos, CA
Collaborative Solutions for Disparate Problems
Peter Denning and Peter Yaholkovsky’s "The Profession of IT" column "Getting to ‘We’" (Apr. 2008) was especially interesting in terms of open-source development. Although both authority and competition have a place in vibrant open-source communities, it is surprising how often collaborative solutions satisfy disparate needs. For example, both SMP scalability and real-time response are sometimes improved by changing the kernel’s synchronization design.
I cannot say whether the five stages of collaboration outlined in the column apply directly to open source, but I can attest to the effectiveness of the third stage: "listen to and learn all perspectives." I’m intrigued by how "declare" and "connect" might be undertaken by a new generation that has grown up with the Internet.
Paul E. McKenney
Beaverton, OR
Disk Increases Size of Memory-Limited Searches
I agree wholeheartedly with Daniel Kunkle and Gene Cooperman’s "Viewpoint" "Solving Rubik’s Cube: Disk Is the New RAM" (Apr. 2008). In fact, my co-authors and I have been developing disk-based search algorithms and using them to solve combinatorial problems since 2002. For example, the well-known Towers of Hanoi problem is more than 125 years old. But with four or more pegs, the optimal solution length is not known in general. Using more than 2TB of disk space, we’ve completed breadth-first searches of the four-peg Towers of Hanoi problem with up to 22 discs, a problem with almost 17.6 trillion states. We’ve also performed disk-based heuristic searches of the problem, confirming a 1941 conjecture due to J.S. Frame and B.M. Stewart about the optimal solutions to these problems for up to 31 discs.
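At a miniature scale, the breadth-first search behind these results can be sketched in a few lines. The sketch below is my own illustration, not the letter writer’s code: it keeps the set of visited states in memory, whereas at trillions of states that set is what gets replaced by sorted files on disk (with delayed duplicate detection), which is the whole point of the disk-based approach.

```python
from itertools import count

def successors(state, pegs=4):
    """Yield all states reachable by one legal move.

    state[d] is the peg holding disc d, with disc 0 the smallest;
    only the topmost (smallest) disc on a peg may move, and only
    onto an empty peg or a larger disc.
    """
    top = {}                              # topmost disc on each occupied peg
    for d, p in enumerate(state):
        top.setdefault(p, d)
    for p, d in top.items():
        for q in range(pegs):
            if q != p and top.get(q, len(state)) > d:
                s = list(state)
                s[d] = q
                yield tuple(s)

def bfs_depth(n, pegs=4):
    """Optimal solution length for n discs moving from peg 0 to the last peg,
    found by level-by-level breadth-first search."""
    start, goal = (0,) * n, (pegs - 1,) * n
    frontier, seen = {start}, {start}
    for depth in count():
        if goal in frontier:
            return depth
        # Expand one BFS level; subtracting `seen` is the duplicate
        # detection that disk-based search performs with sorted files.
        frontier = {t for s in frontier for t in successors(s, pegs)} - seen
        seen |= frontier
```

For three discs on four pegs this returns 5, and for four discs 9, matching the Frame–Stewart values; the classical three-peg case returns the familiar 2^n − 1.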
As another example, the sliding-tile Fifteen Puzzle was by many accounts as famous in the 1880s as Rubik’s Cube in the 1980s, as documented by Jerry Slocum and Dic Sonneveld in their book The 15 Puzzle: How It Drove the World Crazy. In 2005, using 1.5TB of disk space, we completed a breadth-first search of the 4×4 Fifteen Puzzle, with more than 10 trillion states. This verified a result due to Adrian Brungger et al. that the most difficult problem instances take 80 moves to solve. We then found that the most difficult instance of the 2×8 Fifteen Puzzle takes 140 moves to solve.
More recently, we considered the pancake problem, whose only claim to fame is that it is the subject of a technical paper co-authored by Bill Gates ("Bounds for Sorting by Prefix Reversal," 1979). In this problem, we want to sort a stack of pancakes of different sizes using a spatula that flips over the top K pancakes as a group. In another version of the problem, one side of each pancake is burned; in addition to sorting them by size, we now want all burned sides face down. Using 3TB of disk storage, we found the maximum number of moves needed to solve the 14- and 15-pancake unburned problems and the 11- and 12-pancake burned problems. The 15-pancake unburned problem has more than 1.3 trillion states; the 12-pancake burned problem has almost two trillion states.
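The flip operation and a simple sorting strategy are easy to state in code. The sketch below is my own illustration, not the procedure from the letter or the Gates paper: this greedy strategy uses at most roughly 2n flips per stack, whereas the searches described in the letter compute the exact worst-case optimal flip counts over all stacks.

```python
def flip(stack, k):
    """Reverse the top k pancakes (the first k list elements)."""
    return stack[:k][::-1] + stack[k:]

def pancake_sort(stack):
    """Greedy prefix-reversal sort: bring the largest unsorted pancake
    to the top, then flip it down into place. Returns (sorted, flips),
    where flips lists the prefix lengths flipped, in order."""
    stack = list(stack)
    flips = []
    for size in range(len(stack), 1, -1):
        big = stack.index(max(stack[:size]))
        if big != size - 1:          # largest not already in place
            if big != 0:             # bring it to the top first
                stack = flip(stack, big + 1)
                flips.append(big + 1)
            stack = flip(stack, size)
            flips.append(size)
    return stack, flips
```

Each pancake costs at most two flips to place, so a stack of n pancakes needs at most 2n − 3 flips this way; the open question the searches address is the exact worst case for optimal play.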
In addition to these challenging toy problems, many of these techniques are applicable to real-world problems as well. In particular, most NP-complete problems are combinatorial in nature, and all known methods for solving them optimally involve searching large numbers of states. I agree with Kunkle and Cooperman that the use of magnetic disk can increase the feasible size of memory-limited searches by several orders of magnitude.
Richard E. Korf
Los Angeles
Authors’ Response:
We thank Korf for his complimentary comment. We have long admired his own use of disk-based parallel computation. We look forward to the popularization of the technology and its increased use in real-world problems.
Daniel Kunkle
Gene Cooperman
Boston
No Room for Weak Links in Security in Depth
Hal Berghel’s "Digital Village" column "Faith-Based Security" (Apr. 2008) made valid points, and its title was somewhat provocative, but I take issue with its treatment of "security in depth" (SID). SID has a simple definition: create a path of multiple sequential steps such that the total cost of the least-cost path represents the minimum effort an attacker would have to expend to achieve the target (nefarious) effect.
The key words are "sequential" and "sum." In terms of simple graph theory, SID forces the attacker to traverse multiple nodes representing independent compromises at cumulative cost across all edges (such as in two-factor authentication). In contrast, a "chain link" defense is only as strong as its weakest link because the attack takes place simultaneously on all nodes, so the compromise of any one of them compromises the entire defense (such as in port scanning). In the case of a physical chain, straining the chain simultaneously places shearing load where each link joins its neighbor. Once the chain breaks anywhere, its ability to restrain is gone.
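The distinction reduces to sum versus minimum over the costs of independent compromises. A minimal sketch, with made-up effort costs for three hypothetical controls (say, a password, a hardware token, and network segmentation):

```python
def sid_effort(layer_costs):
    """Security in depth: layers must be defeated in sequence,
    so the attacker's total effort is the sum along the path."""
    return sum(layer_costs)

def chain_effort(link_costs):
    """Chain-link defense: compromising any one link defeats the whole,
    so the effective strength is only that of the weakest link."""
    return min(link_costs)

# Hypothetical costs in arbitrary effort units.
costs = [5, 3, 8]
# In depth, the attacker must pay 5 + 3 + 8 = 16;
# as a chain, the defense is worth only min(5, 3, 8) = 3.
```

The trap the letter describes is an architecture whose controls look layered but can each independently compromise the whole system, so its real strength is chain_effort, not sid_effort.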
Amazing to me is how many security professionals and managers do not understand the fundamental difference between SID and the chain-link style of protection. Worse, they may think they have a SID architecture when in fact they have only a chain link.
Robert J. DuWors
Bellevue, WA
Forget WEP; Give Me WPA Encryption
The discussion and advice in Alfred Loo’s "The Myths and Truths of Wireless Security" (Feb. 2008) did nothing to dispel the myths, even as it introduced some advice that should not have been included. For example, being told that "Users should turn on the encryption feature in the router" was misleading, as it indicates that any form of encryption will do. However, the encryption method known as Wired Equivalent Privacy, or WEP, is "worse than useless" (www.securityfocus.com/infocus/1824) and "notoriously flawed" (www.mobile-tech-today.com/perl/story/22207.html). Roberto C. Sanchez said it beautifully: "WEP is the equivalent of leaving the bank vault open at night with a sign saying ‘Please stay out’" (unixadmintalk.com/f11/wep-encryption-ndiswrapper-103651/). The column’s advice should have been: Enable Wi-Fi Protected Access, or WPA, encryption and don’t bother with the rest. No mention was made of fatally flawed encryption methods (it takes seconds to break WEP), indicating, perhaps, that security is beyond the understanding of almost everyone.
There was nothing wrong with the column’s list of user responsibilities, but they are like saying one should check the water, oil, and brake-fluid levels before driving a car. This may be reasonable advice but is likely to be ignored. The car analogy did offer some direction by adapting the standard rules: green lights (or no lights) are OK, orange is a warning, and red is a problem. Manufacturers should thus consider adding green, orange, and red lights to their wireless devices. Orange would mean the device settings have been accessed from outside the private network; clearing the changes would require pressing a physical button on the device. Red would mean an important firmware update is available. Although users can’t be expected to update their firmware, they should be expected to understand that red means "See a technician."
Berend de Boer
Auckland, New Zealand
How to Teach Programming to Novices
Chenglie Hu’s "Viewpoint" "Just Say ‘A Class Defines a Data Type’" (Mar. 2008) addressed issues the South African Education Department has been debating over its curriculum for teaching programming and software development in the country’s high schools (grades 10–12).
Under the curriculum, schools can choose between two programming languages: Java and Delphi. However, the pro-Java and pro-Delphi teachers are divided with regard to object-oriented programming, event-driven programming, use of GUIs, and possible drag-and-drop advantage for Delphi learners. Teaching "objects first" is an approach advocated by most Java teachers and "objects later" an approach advocated by most Delphi teachers. Some Java teachers avoid GUIs and IDEs, preferring a text-based approach. In addition, each side has characterized the other as taking a "procedural approach" rather than an "object-oriented approach."
The two groups are further divided by their views of which language is best for teaching programming to 15-year-olds. Hu suggested the answer lies more in the teaching approach and less in the programming language itself. Many teachers are of the opinion that problem solving should be the focus and that the language is only a tool. However, when teaching how to use that tool, the focus might be more on learning how to use the language than on solving problems. In any case, students must be prepared to answer whether a particular tool should determine the strategy for solving a particular problem.
Should we educators follow Hu’s approach, exposing students to problem-solving paradigms and strategies and preparing them to choose the most appropriate tool (language and/or data structure) to solve the problem at hand? Our decision to remove data representation from the curriculum could turn out to be a mistake if Hu is right that the study of data types, and of how data is represented, is fundamentally important to learning and teaching programming, unless our teachers make sure to fill in this background knowledge.
Our teachers are also pondering other questions in the context of teaching computer programming to novices. For example, what is the proper role of computing algorithms (such as for sorting) when programmers can call a method someone else has written (such as .sort) without understanding how the underlying algorithm works? Students must still understand the reasoning behind such method calls, as well as the proper role of array data types when databases are used as a data structure.
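The contrast can be made concrete for students. An illustrative sketch in Python (the curriculum itself uses Java or Delphi): writing a sort by hand exposes the comparison-and-swap work that a single library call hides, which is exactly the understanding at issue.

```python
def selection_sort(xs):
    """A sort students write themselves: repeatedly find the smallest
    remaining element and swap it into place, making the O(n^2)
    comparison work visible."""
    xs = list(xs)
    for i in range(len(xs)):
        j = min(range(i, len(xs)), key=xs.__getitem__)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

marks = [67, 45, 89, 45, 72]
# The one-line library call gives the same result as the hand-written sort.
assert selection_sort(marks) == sorted(marks)
```

The pedagogical question is then whether students should first meet sorted() as a tool or selection_sort() as an idea.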
Carina Labuscagne
Pretoria, South Africa