Forum

  1. Defense In Depth Gets the Worm
  2. Put Cognitive Modes in CS and Its Curricula
  3. Don't Blame WAP
  4. Market Share vs. Peer Approval in Open Source

Defense In Depth Gets the Worm

Hal Berghel failed to account for the truly pernicious nature of the W32/Blaster worm in his "Digital Village" column (Dec. 2003). As a result, his conclusion that "Eternal vigilance is the best defense against malware," while unexceptionable, misses the more important lesson Blaster taught us: Defense in depth, not just perimeter defense, is necessary for the security of networks.

Though it is true that closing port 135 at the firewall is good practice, doing so assumes that most Blaster infections came through firewalls. In fact, the worm could get into a network through many paths. And once inside a "crunchy on the outside, chewy on the inside" network protected only by a perimeter defense, it had the run of the place. The problem was exacerbated by the fact that port 135 has to be open on any machine functioning as just about any sort of Windows server, since most services depend on it. One would have hoped Microsoft had taken special care to ferret out vulnerabilities in code that listens on port 135 because of this obvious exposure, but the code in question, which apparently dates back to at least Windows NT 4.0, had been through Microsoft’s much-publicized internal audit without the flaw being caught.
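
To make the exposure concrete, here is a minimal Python sketch, not drawn from Wildstrom's letter, that probes whether TCP port 135 is reachable on machines inside the network; the addresses in INTERNAL_HOSTS are hypothetical placeholders. Any internal host that answers is exactly the kind of target Blaster could reach once it got past the perimeter.

import socket

INTERNAL_HOSTS = ["192.168.1.10", "192.168.1.11"]  # hypothetical internal addresses
RPC_PORT = 135  # Windows RPC endpoint mapper, the port Blaster exploited

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

for host in INTERNAL_HOSTS:
    if port_open(host, RPC_PORT):
        print(f"{host}:{RPC_PORT} is OPEN inside the perimeter (host firewall or patch needed)")
    else:
        print(f"{host}:{RPC_PORT} is filtered or closed")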

In my case, Blaster infected two unpatched machines on my home network after tunneling through my firewall on a virtual private network link to an infected corporate network. I now back up the hardware firewall at the gateway with a software firewall on each computer.

I share the blame. I didn’t get those machines patched in time. But it turns out two additional vulnerabilities in the same Windows Remote Procedure Call code were not patched until several weeks later, so patches are no panacea either.

Steve Wildstrom
Washington, D.C.

Author Responds:
Eternal vigilance in no way excludes defense in depth. I don’t know how a security-conscious IT practitioner could be vigilant in any meaningful sense without defending in depth. It also wasn’t the point of the column. As I explained, "My focus … is on the two entries in the table that started and ended the week [of Aug. 11, 2003]: W32/Blaster and SoBig." However, since defense in depth has been brought up, let me share a few observations about Wildstrom’s comments.

First, Windows uses port 135 for specific network services, including RPC (TCP) and Windows Messenger (UDP), not "most services," as Wildstrom suggests. The "best practice" for the NetBIOS/SMB port clusters, as reported in CERT Advisory CA-2003-19 (www.cert.org/advisories/CA-2003-19.html), is to block Internet access to these ports. Any attempt to avoid future RPC vulnerabilities while allowing open access to these ports is destined to fail. This is not to ignore the vulnerabilities associated with telnet, SSH, ftp, rlogin, portmap/rpcbind, NFS, X Windows, IRC, SMTP, POP, IMAP, time, and ports below 20 on TCP and UDP. (Our research center provides a ports database at ccr.i2.nscee.edu/port/.)
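
A minimal sketch of that default-deny stance, assuming a toy rule model rather than any real firewall API; the port set below is illustrative, and the advisory itself is the authoritative source for the full list.

# Illustrative NetBIOS/SMB port cluster commonly blocked at the perimeter;
# see CERT Advisory CA-2003-19 for the authoritative recommendation.
BLOCKED_INBOUND_PORTS = {135, 137, 138, 139, 445}

def allow_inbound(dest_port: int, from_internet: bool) -> bool:
    """Drop Internet-originated traffic aimed at the blocked cluster; allow the rest."""
    return not (from_internet and dest_port in BLOCKED_INBOUND_PORTS)

# The RPC endpoint mapper is unreachable from the Internet...
assert not allow_inbound(135, from_internet=True)
# ...but internal RPC traffic, which Windows services rely on, still flows.
assert allow_inbound(135, from_internet=False)
# Ordinary inbound services on other ports are unaffected by this rule.
assert allow_inbound(80, from_internet=True)
print("policy checks passed")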

Wildstrom’s recommendation of defense in depth is on the mark, but his solution to his problem falls considerably short of his own avowed goal. It places a host-centric software firewall behind a gateway hardware firewall and is thus only two layers deep.

No gateway firewall can withstand a serious distributed denial-of-service attack, so defense in depth would call for at least a border router to shield the slower firewall. But border routers are themselves susceptible to normal traffic flooding, including ping floods, so they, too, need protection, perhaps from an out-of-band network management system and intrusion-detection system. That’s an additional two layers on the Internet side of the workstation. Inside the workstation the host-centric firewall needs to be complemented with log analysis and alert software, file integrity validation, and cryptography. That’s three more layers on the workstation side.
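
As one illustration of the workstation-side layers, here is a minimal sketch of file integrity validation, assuming a hypothetical JSON baseline store and a watch list supplied on the command line rather than any particular product.

import hashlib
import json
import sys
from pathlib import Path

BASELINE = Path("baseline.json")  # hypothetical location for the recorded hashes

def digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline(paths):
    """Record the current digest of every watched file."""
    BASELINE.write_text(json.dumps({str(p): digest(p) for p in paths}))

def check_baseline():
    """Return the watched files whose contents have changed since the baseline."""
    saved = json.loads(BASELINE.read_text())
    return [p for p, h in saved.items() if digest(Path(p)) != h]

if __name__ == "__main__":
    watched = [Path(arg) for arg in sys.argv[1:]]  # files to watch, given on the command line
    if not BASELINE.exists():
        record_baseline(watched)
        print("baseline recorded for", len(watched), "files")
    else:
        print("modified since baseline:", check_baseline() or "none")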

So, for serious defense in depth, the starting point is Wildstrom’s initial two layers, plus two more on the Net, plus three more on the workstation, or seven layers altogether—even though they still don’t get us to our organization’s DMZ.

Hal Berghel
Las Vegas, NV


Put Cognitive Modes in CS and Its Curricula

There’s another way to explain the structure of the computing field besides the one outlined by Peter J. Denning in his "The Profession of IT" column (Nov. 2003). It emphasizes cognitive modes—intention, definition, organization, expression, processing, evaluation, and recall—which together offer a less confusing way to describe the field and a better model for a CS curriculum. I therefore propose the following mutually supportive curriculum categories and the material each must address:

For applications (project management), philosophical distinctions, programming, control structures, and heuristics;

For data (computational objects), data types, languages, naming, referencing, description, and object definition;

For organization (computational structure), modeling, hierarchies, logic, comparability, collaborative frameworks, mappings, and networks;

For communications (computational mediation), input and output, interfaces, translation, filtering, security, and multiple entities cooperating toward a common result (collaboration);

For processing (computational mechanics), algorithms, automata, dynamics, simulation, transmission from one point to another, traffic management, and execution efficiencies;

For outcomes (computational assessment), monitoring, measurement, validation, goal-based testing, and end-to-end error checking; and

For recall (knowledge management), storing and retrieving information, searching, adaptive systems, system administration, and evolution.

Charles Burnette
Philadelphia

Peter J. Denning sketched a principles-based portrait of computing covering mechanics, design, practices, and core technologies. Left out, however, were creativity and critical-skills training. CS students whose courses are characterized by drill and practice must also learn critical thinking skills to appreciate why some laws and recurrences govern computing operations, as well as how certain computing methods are best implemented and how to sense when a new standard or convention is about to emerge.

Less important is which philosophy, representation, or approach is adopted in CS curricula. The goal must always be to help students become computer scientists and engineers who are not only knowledgeable but able to think critically and creatively as well.

Jiming Liu
Hong Kong

Author Responds:
In researching the great principles proposal, I weighed a number of ways to identify and group the principles of the field. I declined an approach like Burnette’s because computing’s historical development is not explained clearly enough through cognitive categories.

Moreover, the cognitive interpretation is recent. The earliest computers were built to automate tedious calculations, such as arithmetic tables (Babbage, 1830), code breaking (U.K., 1940), and ballistics (U.S. Army, 1945). Serious speculation about computers imitating human cognition did not begin until 1950, and AI did not begin as a formal field of study until the late 1950s. Cognitive science, which studies the connections among mind, brain, and computation, evolved from this foundation in the 1960s.

I sought a framework distinguishing how computations work (mechanics), how we organize them (design), and how we build and evaluate them (practices). I do not see how to do this using Burnette’s categories.

Liu wants the framework to include critical and creative thinking. Recognizing that computer scientists value critical and creative thinking in practice, I included innovation as part of the framework. So it’s already there.

Peter J. Denning
Monterey, CA


Don’t Blame WAP

Xu Yan compared WAP and NTT DoCoMo’s iMode in "Mobile Data Communications in China" (Dec. 2003), repeating several common misunderstandings about these technologies. Though the Wireless Application Protocol (WAP) is far from a commercial success, the technology alone is not to blame. Early WAP was often deployed over the connection-oriented infrastructure of the Global System for Mobile communications (GSM), with its attendant call set-up and billing problems. WAP today is deployed over the General Packet Radio Service (GPRS), which—like iMode’s Personal Handyphone System network—offers a packet-oriented service much better suited to HTTP-based applications.

Moreover, the article overplayed the conflict between the Wireless Markup Language and Compact HTML. Because applications must be designed specifically for smaller devices, HTML-based content cannot be reused one-to-one. The effort to create good WAP applications is no greater than the effort to create good iMode applications.

Even during the mobile Internet bubble several years ago, WAP could not deliver in light of excessive expectations, poor applications, and missing GPRS infrastructure—none an inherent problem of WAP itself. Incidentally, the uptake of iMode in various European markets indicates the technology is not the only issue; user sociology and application quality play far greater roles.

Carl Binding
Rueschlikon, Switzerland


Market Share vs. Peer Approval in Open Source

Though the headline "A Sociopolitical Look at Open Source" of Robert L. Glass’s "Practical Programmer" column (Nov. 2003) led me to expect a thoughtful exploration of the social and political milieu of the open source movement, it fell short of the mark. Its comparison of the movement to a utopian society, and of open source contributors to idealistic members working for communal accolades alone, overlooked important realities concerning the people and organizations in the open source movement. For example, did IBM donate $40 million worth of software to launch the Eclipse project because it craved the applause of the open source community? More likely, it wanted to increase market share—or prevent Microsoft from gaining a monopoly—in the desktop developer market. Open source evangelist Eric Raymond described such enlightened self-interest as but one open source business model in his 1999 essay "The Magic Cauldron," labeling it the Loss Leader/Market Positioner model, in which open source software helps establish market position for proprietary software. Eclipse is the foundation of WebSphere Studio Application Developer, which is tightly integrated with WebSphere, which in turn is integrated with IBM business applications and consulting services.

Glass also overlooked the historical motivation of scholars laboring for little more than peer recognition and the personal satisfaction of advancing human knowledge. Today, academic and professional journals pay little or nothing for the articles they publish, yet editors reject many more articles than they accept. Why do people labor for so little? Personal autonomy is the answer.

Scholars are free to pursue their own interests wherever they might lead. Likewise, many developers who enjoy programming but are frustrated by the constraints of the commercial software world are attracted to open source by the personal autonomy it offers. They work on significant projects, gain peer respect, and contribute to the advancement of human knowledge. Undoubtedly, the desire for peer approval is a motivating factor, but self-interest and the desire for personal autonomy cannot be overlooked.

Robert Swarr
New Britain, CT

