Letters to the Editor

Software Engineering, Like Electrical Engineering

Though I agree with the opening lines of Ivar Jacobson’s and Ed Seidewitz’s article "A New Software Engineering" (Dec. 2014) outlining the "promise of rigorous, disciplined, professional practices," we must also look at "craft" in software engineering if we hope to raise the profession to the status of, say, electrical or chemical engineering. My 34 years as a design engineer at a power utility, IT consultant, and software engineer show me there is indeed a role for the software engineer in IT. Consider that electricity developed first as a science, then as electrical engineering once practical solutions had to be designed. Likewise, early electrical lab technicians evolved into today’s electrical fitters and licensed engineers.

The notion of the software engineer has existed for less than 30 years and is still evolving from science to craft to engineering discipline. In my father’s day (50 years ago) it was considered a professional necessity for all engineering students to spend time "on the tools," so they would gain an appreciation of practical limitations when designing solutions. Moving from craft to engineering science is likewise important for establishing software engineering as a professional discipline in its own right.

I disagree with Jacobson’s and Seidewitz’s notion of a "…new software engineering built on the experience of software craftsmen, capturing their understanding in a foundation that can then be used to educate and support a new generation of practitioners. Because craftsmanship is really all about the practitioner, and the whole point of an engineering theory is to support practitioners." By contrast, when pursuing my master of applied science in IT 15 years ago, I included a major in software engineering based on a software engineering course at Carnegie Mellon University covering state analysis of safety-critical systems using three different techniques, the kind of formal grounding that craft experience alone does not provide.

Modern craft methods like Agile software development help produce non-trivial software solutions. But I have encountered a number of such solutions that rely on the chosen framework to handle scalability, assuming that adding more computing power can overcome performance and user response-time limitations when the software is scaled to a cloud environment with perhaps tens of thousands of concurrent users.
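A back-of-the-envelope sketch in Python shows why that assumption fails. It is my illustration, not anything from the letters or the cited article, and it uses Amdahl's law with hypothetical numbers (a 2.0-second request of which 90% parallelizes): the portion of a request that cannot be spread across nodes sets a floor on response time that no amount of added hardware removes.

# Hypothetical illustration: a request that takes 2.0 s on one node, of which
# 90% can be spread across nodes and 10% is serial (coordination, a single
# database row, a framework bottleneck, and so on).

def amdahl_speedup(parallel_fraction: float, nodes: int) -> float:
    """Upper bound on speedup when only part of the work scales out (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / nodes)

def best_case_response_time(single_node_seconds: float,
                            parallel_fraction: float,
                            nodes: int) -> float:
    """Best-case response time after spreading the work across `nodes` machines."""
    return single_node_seconds / amdahl_speedup(parallel_fraction, nodes)

if __name__ == "__main__":
    for n in (1, 4, 16, 64, 256):
        print(f"{n:4d} nodes -> {best_case_response_time(2.0, 0.90, n):.3f} s")
    # The times approach 0.2 s (the serial 10%) and never improve further,
    # however many nodes the framework adds.

Only an architectural change that shrinks the serial portion moves that floor, which is precisely the kind of decision a software engineer is asked to make.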

In the same way electrical engineers are not called in to design the wiring system for an individual residence, many software applications do not need the services of a software engineer. The main benefit of a software engineer is the engineer’s ability to understand a complete computing platform and its interaction with infrastructure, users, and other systems, then design a software architecture to optimize the solution’s performance in that environment or select an appropriate platform for developing such a solution.

Software engineers with appropriate tertiary qualifications deserve a place in IT. However, given the many tools available for developing software, the instances where a software engineer is able to add real benefit to a project may not be as numerous as in other more well-established engineering disciplines.

Ross Anderson, Melbourne, Australia

No Hacker Burnout Here

I disagree strongly with Erik Meijer’s and Vikram Kapoor’s article "The Responsive Enterprise: Embracing the Hacker Way" (Dec. 2014) and its claim that developers "burn out by the time they reach their mid-30s." Maybe it is true that "some" or perhaps even "many" of us stop hacking at around that age, but the generalization is absolutely false as stated.

Some hackers do burn out and some do not, which makes the proposition erroneous, if not clearly offensive to the admitted minority still hacking away. I myself retired in 2013 at 75. And yes, of nine developers I was the oldest hacker on my team and the only one born in the U.S. Meijer himself is likely no spring chicken, given that he contributed to Visual Basic, yet he is likewise still hacking away. At the moment, I am just wrapping up a highly paid contract; a former client called me out of retirement. Granted, these are just two cases, but it takes only one exception to show Meijer’s and Kapoor’s generalization is false.

I do agree with them that we hackers (of any age) should be well-compensated. Should either of their companies require my services, my rate is $950 per day. If I am needed in summer—August to September—I will gladly pay my own expenses to any location in continental Europe. I ride a motorcycle through the Alps every year and would be happy to take a short break from touring to roll out some code; just name the language/platform/objective.

As to the other ideas in the article—old (closed-loop system) and new (high pay for developers)—more research is in order. As we say at Wikipedia, "citation needed." Meanwhile, when we find one unsubstantiated pronouncement that is blatantly false in an article, what are we to think of those remaining?

Keith Davis, San Francisco, CA

What to Do About Our Broken Cyberspace

Cyberspace has become an instrument of universal mass surveillance and intrusion threatening everyone’s creativity and freedom of expression. Intelligence services of the most powerful countries gobble up most of the world’s long-distance communications traffic and are able to hack into almost any cellphone, personal computer, and data center to seize information. Preparations are escalating for preemptive cyberwar because a massive attack could instantly shut down almost everything.1 Failure to secure endpoints—cellphones, computers, data centers—and securely encrypt communications end-to-end has turned cyberspace into an active war zone with sporadic attacks.

Methods I describe here can, however, reduce the danger of preemptive cyberwar and make mass seizure of the content of citizens’ private information practically infeasible, even for the most technically sophisticated intelligence agencies. Authentication businesses, incorporated in different countries, could publish independent directories of public keys that can then be cross-referenced with other personal and corporate directories. Moreover, hardware has been developed that independent parties can verify as operating according to formal specifications, making mass break-ins through operating system vulnerabilities practically infeasible.2 Security can be further enhanced through interactive biometrics (instead of passwords) for continuous authentication and through interactive incremental revelation of information so large amounts of it cannot be stolen in one go. The result would be strong, publicly evaluated cryptography embedded in independently verified hardware endpoints, producing systems that are dramatically more secure than current ones.
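As a hedged sketch of the directory cross-referencing idea (my illustration, not the author's system; the directory names, user identifier, and fingerprints below are hypothetical), a client could accept a public-key fingerprint only when a quorum of independently operated directories publishes the same value, so a single compromised or coerced directory cannot substitute a key unnoticed.

from collections import Counter
from typing import Optional

# Hypothetical listings from authentication businesses incorporated in
# different countries; each maps a user identifier to the public-key
# fingerprint that directory publishes.
DIRECTORIES = {
    "dir-ch.example": {"alice@example.org": "a1b2c3d4"},
    "dir-br.example": {"alice@example.org": "a1b2c3d4"},
    "dir-jp.example": {"alice@example.org": "ffee9988"},  # disagrees: possibly tampered with
}

def cross_referenced_fingerprint(user: str, quorum: int = 2) -> Optional[str]:
    """Return a fingerprint only if at least `quorum` independent directories agree on it."""
    votes = Counter(
        listing[user] for listing in DIRECTORIES.values() if user in listing
    )
    if not votes:
        return None
    fingerprint, count = votes.most_common(1)[0]
    return fingerprint if count >= quorum else None

if __name__ == "__main__":
    # Two of the three hypothetical directories agree, so "a1b2c3d4" is accepted.
    print(cross_referenced_fingerprint("alice@example.org"))

The same cross-check could extend to the personal and corporate directories mentioned above, and any disagreement between directories is itself a useful public signal.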

FBI Director James Comey has proposed compelling U.S. companies to install backdoors in every cellphone and personal computer, as well as in other network-enabled products or services, so the U.S. government can (with authorization of U.S. courts) hack in undetected. This proposal would actually increase the danger of cyberwar and decrease the competitiveness of almost all U.S. industry; with the emerging Internet of Things soon encompassing almost everything, such backdoors would enable mass surveillance of citizens’ private information. Comey’s proposal has already increased mistrust among foreign governments and citizens alike, with the result that future exports of U.S. companies will have to be certified by corporate officers and verified by independent third parties not to have backdoors available to the U.S. government.

Following the next major terror attack, which seems all but inevitable, the U.S. government will likely be granted bulk access to all private information in the data centers of U.S. companies. Consequently, creating a more decentralized cyberspace is fundamental to preserving creativity and freedom of expression worldwide. Statistical procedures running in data centers are used to try to find correlations in vast amounts of inconsistent information. An alternative method that can run on citizens’ cellphones and personal computers has been developed to robustly process inconsistent information,2 thereby facilitating new business implementations that are more decentralized—and much more secure.

Carl Hewitt, Palo Alto, CA

Ordinary Human Movement as False Positive

It might indeed prove difficult to train software to detect suspicious or threatening movements based on context alone, as in Chris Edwards’s news story "Decoding the Language of Human Movement" (Dec. 2014). Such difficulty makes me wonder whether a surveillance system trained to detect suspicious activity could view perfectly ordinary movement as "strange" and "suspicious," given a particular location and time, and automatically trigger a security alert. For instance, I was at a bus stop the other day and a fellow rider started doing yoga-like stretching exercises to pass the time while waiting for the bus. Projecting a bit, could we end up in a world where ordinary people like the yoga person feel compelled to move about in public like stiff robots for fear of triggering a false positive?

Eduardo Coll, Minneapolis, MN
