Forum

The Folk Art of Information Security Needs an Upgrade

Tom Slewe and Mark Hoogenboom fairly accurately defined information security as it exists today in "Who Will Rob You on the Digital Highway?" (May 2004). They also inadvertently revealed what an incorrect, inconsistent, and incomplete folk art it is as well.

Information security is akin to the practice of alchemy in ancient times, when there were only four known elements: air, water, fire, and earth. Slewe and Hoogenboom covered confidentiality, integrity, and availability (CIA), the well-known elements of information security as they appear in many organizations’ security policies. However, CIA falls short in terms of validity, completeness, and precision. The generally accepted definition of confidentiality they provided omitted the need to protect the possession of information that is not confidential (such as the content of free and licensed software). They also incorrectly defined integrity as including correctness, validity, and authenticity. And their definition of availability omitted the need for information to be in useful form.

Dictionaries define the integrity of information as its being complete and in good condition. But information may have integrity yet be false, or be correct yet lack integrity. This distinction is important because security measures for information integrity and for information authenticity are not the same.

Information security involves six elements: confidentiality, possession, integrity, authenticity, availability, and utility. But many experts reject the need for all six as too complicated, so information security persists as a folk art.
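To illustrate (this is only a sketch; the names and structure are mine, not a standard), the six elements can be treated as a simple checklist so that a review flags whatever a control set fails to address:

    from enum import Enum

    class SecurityAttribute(Enum):
        """The six elements of information security."""
        CONFIDENTIALITY = "confidentiality"
        POSSESSION = "possession"
        INTEGRITY = "integrity"
        AUTHENTICITY = "authenticity"
        AVAILABILITY = "availability"
        UTILITY = "utility"

    def uncovered(controls):
        """Return the attributes for which no control has been recorded."""
        return [attr.name for attr in SecurityAttribute if not controls.get(attr)]

    # A hypothetical policy that addresses only the classic CIA triad.
    cia_only = {
        SecurityAttribute.CONFIDENTIALITY: ["access controls"],
        SecurityAttribute.INTEGRITY: ["checksums"],
        SecurityAttribute.AVAILABILITY: ["redundant servers"],
    }
    print(uncovered(cia_only))  # ['POSSESSION', 'AUTHENTICITY', 'UTILITY']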

Slewe and Hoogenboom identified only three kinds of attacks: man-in-the-middle, woman-in-the-house (an unnecessary term for impersonation), and Trojan horse. Such a short list should please cybercriminals, because it means we’re overlooking the many other important attacks that must be prevented, including false data entry, denial of service, release of worms, repudiation, endangerment, and deception.

It is generally accepted that information security targets risk reduction through assessment of probabilistic risk and pursuit of risk management. This objective is wrong and contributes to the poor formulation of the security folk art. Risk of rare loss incidents (as a function of future frequency and results) is neither measurable nor controllable and therefore not manageable for a specific organization, though it is useful in insurance actuarial calculations involving the loss experienced by many customers.

The future frequencies and effects of incidents for specific organizations in specific security settings are determined by unknown enemies, not by victims. These enemies attack at unknown times using unknown methods and resources for unknown purposes against known and unknown vulnerabilities with unknown consequences. As far as I know, the validity of security risk assessments has never been demonstrated or reported, yet most security experts, laws, standards, and guidelines tell us to perform them. What a poor discipline we have, based on unproved and obviously faulty methods.

I have performed more than 250 information security reviews over the past 35 years without resorting to probabilistic risk assessment, basing them instead on a sound model of security concepts discussed in my book Fighting Computer Crime: A New Framework for Protecting Information (John Wiley & Sons, 1998). Using threat and vulnerability analysis and due diligence, I avoid negligence by benchmarking security measures against those of other organizations in similar circumstances and by meeting the requirements of legislation, standards, and good practice.

Donn B. Parker
Los Altos, CA


Don’t Let Patents Inhibit Software Innovation

In the field of software patents, Pamela Samuelson’s "Legally Speaking" column ("Why Reform the U.S. Patent System?," June 2004) missed an important point: Patents don’t necessarily accelerate the rate of innovation and can even slow it down. I strongly object to patents on such "nonmaterial" products as software. Innovation becomes increasingly expensive because innovators must inform themselves of all existing patents in the field, negotiate with the patent holders, and pay for licenses.

The fight against software patents is apparently lost in the U.S., but the Foundation for a Free Information Infrastructure (swpat.ffii.org), along with many affected people throughout Europe, is still trying to find a better way.

Dani Oderbolz
Berlin, Germany


Hold the Methodologies; Give Me Tools

As Robert L. Glass discussed in his "Practical Programmer" column ("Matching Methodology to Problem Domain," May 2004), IS practitioners need tools, not necessarily another methodology.

Some might argue that agile methods, for example, are brittle in the sense that you must use all of their components or the whole approach is likely to fall apart. I want tools that function independently, so that using even one of them is better than doing without it in development efforts that call for such a tool.

I don’t need UML’s deadweight tonnage but do sometimes need its diagrams. I want a lightweight tool that maintains the diagrams. I don’t need the rigidity of the waterfall model. I want a tool for capturing user needs and mapping them to design elements. I don’t need the religious fervor of agile methods. I want tools and techniques for the rapid review of software. I also want testing frameworks.

I’m unable to collect enough data on small projects to make use of software metrics, but I’d still love a canned metrics-gathering tool, along with the smarts to analyze the metrics.
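For what it’s worth, even a few lines of script get partway to such a tool. The sketch below is a purely illustrative example using Python’s standard ast module to gather one trivial metric, lines per function, from a source file:

    import ast
    import sys

    def function_lengths(source):
        """Map each function name in a Python source file to its length in lines."""
        lengths = {}
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                lengths[node.name] = node.end_lineno - node.lineno + 1
        return lengths

    if __name__ == "__main__":
        with open(sys.argv[1]) as f:   # e.g. python funclen.py mymodule.py
            for name, length in sorted(function_lengths(f.read()).items()):
                print(f"{name}: {length} lines")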

Kurt Guntheroth
Seattle, WA

I was pleased that Robert L. Glass made the point in his May 2004 column that "it is not enough to propose a new methodology without discussing when its use might be appropriate."

One reason for the number of proposed methodology panaceas may be the close relationships between vendors and methods. Vendors are more interested in selling products than in such matters as validity and applicability.

The methodologist Michael Jackson made the same point about panacea claims, writing: "It’s a good rule of thumb that the value of a method is inversely proportional to its generality." A method for solving all problems gives you little help with any particular problem (see his book Software Requirements and Specifications, Addison-Wesley, 1995).

Jackson is one of the few methodologists to carefully delimit the scope of his own methods; for example, JSP works for simple programs whose inputs and outputs can be described as regular expressions, while JSD works well for dynamic information systems.
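To make that scope claim concrete, here is a rough sketch in the JSP spirit (my own illustration with a made-up input format, not Jackson’s notation): the program’s loop structure simply mirrors the input’s structure, a sequence of customer groups, each an iteration of transaction records:

    from itertools import groupby

    def summarize(transactions):
        # JSP-style: the program structure mirrors the input structure.
        # Input: a sequence of customer groups, each an iteration of (customer, amount) records.
        for customer, records in groupby(transactions, key=lambda r: r[0]):
            total = sum(amount for _, amount in records)   # process one group
            print(f"{customer}: {total}")                  # one output line per group

    # Hypothetical input, already sorted by customer as JSP's regular structure assumes.
    summarize([("alice", 10), ("alice", 5), ("bob", 7)])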

Needed is a discipline for analyzing and classifying problems—the subject of Jackson’s research over the past decade, culminating in his book Problem Frames (Addison-Wesley, 2001).

I am disappointed but also curious about why Glass didn’t acknowledge Jackson’s contributions, especially as Jackson embodies the combination of practitioner and academic influences Glass identifies as qualifications for being a methodologist.

Nicholas Ourusoff
New London, NH

I’m as much an engineer as a scientist: I study the theory but then have to put it into practice. Since I work in a support department dealing with everyone else’s mistakes, I have some idea what works and what doesn’t. I find too many people assume that each individual methodology is good for all domains. Robert L. Glass’s May 2004 column recommended a mapping between methodologies and the types of problems for which they are best suited. However, a methodology is only as productive as an organization is willing to make it.

Managers must ask: How does the company philosophy facilitate (or inhibit) the methodology? How well do the developers’ attitudes match it? How much experience does the organization have with the processes the methodology involves? How far is the organization willing to stretch its philosophy and culture to accommodate the methodology, point by point for each of its elements? And how much extra time and effort is it willing to put into that stretching? Then there’s the problem domain.

I also want to know: In what format is data normally stored? Are data entities the norm? How is raw data separated from processed data? And what is the natural way to think about the problem: is it a process, a series of contractual interactions, or utter chaos that must be tamed?

I find life is much easier if you stick with the basics, no matter what methodology you use: keep it readable, because it will be modified; keep it simple, to ease the burden on the people who will modify it; understand what it’s supposed to do and how it’s supposed to do it; and don’t be ashamed to look at what it actually does and compare that with what it’s supposed to do. Computers do as they’re told; they lack the brains to know what you want them to do, so make life easy on their brainless hardware.

Michael J. Lewchuk
Edmonton, Canada


Measuring Project Friction and Chaos

Phillip G. Armour’s "The Business of Software" column ("Real Work, Necessary Friction, Optional Chaos," June 2004) was illuminating on the reality of software project estimation and the dilemma of software project scheduling. Bravo on making clear the problems of this process, but I wonder if it is possible to quantify any of the issues he described. Is there a way to determine the magnitude of the friction or of the chaos? Do any studies suggest the magnitude of these numbers? Could a quantitative formula be devised and tested on real-world projects?

One key element not addressed was individual programmer productivity. In a study I conducted some years ago to quantify the ability of individual programmers, I found an enormous range of productivity across a programming staff of 200. Any accurate schedule estimate must therefore account for the skill of the individuals assigned to the work; the possible variation in skill is so great that it can swamp all other factors in the analysis.
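A back-of-the-envelope sketch makes the point; the overhead fractions and the 10:1 productivity range below are purely hypothetical numbers chosen for illustration, not figures from Armour’s column or from my study:

    def schedule_weeks(real_work, friction, chaos, productivity):
        # Toy estimate: real work inflated by friction and chaos overhead,
        # then scaled by the assigned programmer's productivity multiplier.
        return real_work * (1 + friction + chaos) / productivity

    real_work = 20.0              # hypothetical "real work" in person-weeks
    friction, chaos = 0.3, 0.2    # hypothetical 30% friction, 20% chaos overhead

    for productivity in (0.5, 1.0, 5.0):    # a 10:1 skill range
        print(productivity, schedule_weeks(real_work, friction, chaos, productivity))
    # 60.0, 30.0, and 6.0 weeks: the skill spread dwarfs the overhead terms.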

Ed Bryan
Pacific Palisades, CA


Less Is More Code

Regarding Michael J. Wax’s Forum comment (May 2004) on the subject of counting lines of code (LOC), perhaps we should all ponder what the French mathematician Blaise Pascal wrote nearly 350 years ago: "Je n’ai fait celle-ci plus longue que d’habitude parce que je n’ai pas eu le loisir de la faire plus courte" ("I have made this letter longer than usual, only because I have not had the time to make it shorter").

The relationship between LOC and quality in computer programs is certainly nonlinear. On average, visible LOC decrease as a program matures, or are at least likely to decrease if there is no related increase in functionality. At least one school of thought holds that dense code (fewer LOC) is better than less-dense code, yet dense code is usually much more difficult to understand and to maintain.

By the way, when calculating LOC as a metric, should lines of commentary and deleted lines be included? And should programmers be rewarded on the basis of total number of lines written or of total number of useful lines?
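The ambiguity is easy to demonstrate; the toy counter below, a sketch that assumes Python-style "#" comments, yields three different "LOC" figures for the same five-line file depending on what one chooses to count:

    def loc_variants(lines):
        # Return (physical, non-blank, code-only) line counts for one file.
        physical = len(lines)
        non_blank = sum(1 for line in lines if line.strip())
        code_only = sum(1 for line in lines
                        if line.strip() and not line.strip().startswith("#"))
        return physical, non_blank, code_only

    sample = [
        "# compute a total\n",
        "total = 0\n",
        "\n",
        "for x in (1, 2, 3):\n",
        "    total += x  # running sum\n",
    ]
    print(loc_variants(sample))  # (5, 4, 3) -- which figure is "LOC"?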

Ian Perry
Brussels, Belgium

