Forum

Don't Target Me, Unless I Say You Can

Despite touching on privacy concerns, Alexander Pons’s article "Biometric Marketing: Targeting the Online Consumer" (Aug. 2006) seemed to assume it is not only appropriate but desirable for e-tailers to know who is visiting their Web sites and what they are doing there. I can see why this might be desirable from the merchant’s point of view, but I personally don’t want my every movement on a Web site to be identifiable and tracked by who knows whom for who knows what purpose.

Consider a physical analogy. You walk into a store where a greeter asks you your name, writes it down, then follows you around, taking notes about the aisles you visit, the products you examine, and the items you put in your cart. This person periodically tells you about a product the store thinks you might want—the kind of targeted advertising Pons feels would "produce the greatest benefits for both their [the marketers’] companies and their consumers." This person might even want to know which other stores you’ve been in recently.

This kind of personal attention would make me nervous. In a physical store the attention would at least be visible; in an online store, biometric tracking is invisible. I would be less concerned if the tracking were done exclusively on an opt-in basis. I value my privacy. I don’t wish to give merchants any more information than they need to sell me the product(s) I might be buying at the moment. Others, as Pons suggested, might wish to provide personal information or even biometric information that is unique but anonymous.

I would not patronize a store (online or otherwise) that tracked such information without my having previously and explicitly opted in. Consider me one online consumer who does not wish to be targeted.

Karl Wiegers
Happy Valley, OR


Avoid the Trap of Unrealistic Expectations

Descriptions of security products often produce unrealistic user expectations, sometimes doing less than they should to avoid a misunderstanding. We tried not to fall into this trap when describing Polaris—our virus-safe computing environment for Windows XP—in our article "Polaris: Virus-Safe Computing for Windows XP" (Sept. 2006). Unfortunately, some readers might have misinterpreted Diane Crawford’s "Editorial Pointers" (in the same issue) describing Polaris as "an HP creation that protects computers and their applications against viruses and worms …"

A program launched under any of the operating systems in widespread use today (such as Windows, Linux, and Mac OS) typically runs with all the permissions of the user who launched it. Malicious code could use these permissions to take actions the user doesn’t want or like. A sidebar in the article explained that we use the term "virus" to refer to this type of malware and the term "worm" to refer to malicious code that runs in processes not necessarily started by the user.
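
To make this concrete, consider a minimal Python sketch (our illustration only, nothing from Polaris itself): a child process a user launches can immediately see everything the user can see, whether or not its job requires it.

    import os
    import subprocess
    import sys

    # Any program the user launches inherits the user's permissions.
    # This child process has no business in the home directory, yet it
    # can list (and could read, modify, or delete) everything there.
    home = os.path.expanduser("~")
    child = subprocess.run(
        [sys.executable, "-c",
         "import os, sys; print(os.listdir(sys.argv[1]))", home],
        capture_output=True, text=True)
    print(child.stdout)  # the child saw every file the user can see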

Polaris limits the damage a virus might do by running programs in accounts with few permissions. All bets are off, however, should the virus exploit a flaw in the operating system to gain additional privileges.
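
The account-limiting idea can be sketched with the stock Windows runas command; this is our illustration of the principle, not Polaris’s own launch mechanism, and the restricted account named here is assumed to have been created in advance.

    import subprocess

    # "LimitedApp" is a hypothetical, pre-created account with few
    # permissions; Polaris itself launches programs differently.
    RESTRICTED_USER = "LimitedApp"

    # runas prompts for LimitedApp's password, then starts Notepad with
    # only that account's permissions: a virus riding in a document can
    # damage only what LimitedApp can touch.
    subprocess.run(["runas", f"/user:{RESTRICTED_USER}", "notepad.exe"])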

Polaris also does little to protect against worms, which typically attack processes running with system or root privileges. In spite of these limitations, Polaris provides real protection against an entire class of malware, just not as much as implied in Crawford’s summary.

Marc Stiegler
Alan H. Karp
Ka-Ping Yee
Tyler Close
Mark S. Miller
Palo Alto, CA


Fewer Degrees of Separation Means More Precision in Software Projects

The principal reason the software development enterprise seems irrational is that it is a social human activity. The mathematical principle of chaos may even play a role in our attempt to rationalize and industrialize software development. If it does, then mistakes are inevitable. To improve, as Phillip G. Armour discussed in his "The Business of Software" column ("Software: Hard Data," Sept. 2006), we must do one of two things: be more precise in our measurement systems, so chaos is less significant, or decrease the size of our projects and teams, so error cannot accumulate to a significant degree.

Software is developed iteratively through a closed-loop feedback system. We do some work on a prototype, inspect the result and, using some measure, decide whether or not it’s finished. If it’s not, we repeat the procedure on the new prototype, ad infinitum. Our standard of suitability is generally whether it’s "good enough." Suppose our means of measuring good enough, while accurate enough, is not precise enough. We’ll probably make the right decision for each individual step until the accumulated error exceeds the precision of our ability to measure. We cannot expect an error in a subsequent step to offset an error in a prior step. And no matter how close to the ideal we are, our error prospects won’t improve.

Suppose we have some work product x with an initial value of 0.5 and some work that is recursively applied to x, yielding x′, such that x′ = x² − 2. Repeat this procedure 30 times on a spreadsheet calculator until, at the end of the 30th iteration, x = 1.86080156947415. Experiment with initial values of x, such that it is arbitrarily close to 0.5 but not 0.5. Using an initial value of x = 0.50001, the 30th iteration yields x′ = −1.93935374520355, which is utterly wrong by most accounts. Noise in the initial value has a profound and disproportionate effect on the ultimate outcome. However, increasing the precision of the initial value by orders of magnitude still does not result in a proportional improvement in error performance.
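
A few lines of Python reproduce the experiment; the trailing digits may differ slightly from a spreadsheet’s, depending on the floating-point environment, but the divergence is the point.

    def iterate(x, steps=30):
        """Apply the recursive work step x' = x*x - 2 repeatedly."""
        for _ in range(steps):
            x = x * x - 2
        return x

    print(iterate(0.5))      # about  1.86080156947415
    print(iterate(0.50001))  # about -1.93935374520355, utterly different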

This goes a long way toward explaining why small projects and, more important, small teams working on large projects have a greater prospect of success than large teams in similar situations, regardless of skill. The fewer degrees of separation between an initial state and a final state, the more likely we are to achieve the final state with any degree of precision.

Jeffrey A. Rosenwald
Frederick, MD

The characteristics of a typical software development project surveyed by Quantitative Software Management (QSM) and cited by Phillip G. Armour (Sept. 2006) made me suspect QSM’s results and interpretations almost immediately. The 9,200 lines of code (LOC) / 58 staff months = ~158 LOC per month, or ~1 LOC per hour, for a COBOL project sounded too low to reflect reality, though I suspect these numbers also include time for management, design, debugging, and assorted packaging details. COBOL is an older language that may not be useful for measuring productivity, for several reasons. Most notably, more complex applications can be written in other languages; an example is a VB script that takes advantage of the Component Object Model (COM) interfaces in Microsoft Excel to generate charts after collecting data from a proprietary database. COBOL is less suitable here, since the charting code would have to be written by hand, whereas Excel provides built-in charting support.
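
The Excel example can be sketched in a few lines; this version uses Python with the pywin32 package rather than VB script, and the package choice and sample data are my own illustration.

    import win32com.client

    # A handful of lines lean on Excel's COM interfaces to do the heavy
    # lifting; equivalent hand-written COBOL charting code would dwarf this.
    xl = win32com.client.Dispatch("Excel.Application")
    wb = xl.Workbooks.Add()
    ws = wb.Worksheets(1)

    # Stand-in for data collected from the proprietary database.
    for row, (label, value) in enumerate(
            [("Q1", 10), ("Q2", 30), ("Q3", 25)], start=1):
        ws.Cells(row, 1).Value = label
        ws.Cells(row, 2).Value = value

    chart = wb.Charts.Add()                 # Excel supplies the charting
    chart.SetSourceData(ws.Range("A1:B3"))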

Smaller teams would show better performance on any project because time to code is always much less than time for design overhead.

Compiler tools from Microsoft (and other sources) also help generate code. Just using one influences the number of resulting LOC, yet Armour didn’t mention related measurement methods in the context of QSM’s survey. As these tools improve, productivity measured this way will suggest that fewer LOC are being written, even as more LOC are generated.

Consider a task that takes a programmer-month to complete using a well-known way to write a script, with consequences (such as debug time and poor quality) that must be corrected later. Now introduce a tool that lets a user describe the task in a different language, catch all problems at compile time, and debug what remains in one or two programmer-days.

Instead of coding, the coder now describes the problem set to the tool. This paradigm shift alone has increased productivity 15 to 30 times, even as the number of LOC is significantly reduced. No apples-to-apples comparison is possible; a QSM analysis needs something to compare against and thus would make no sense, but the increase in productivity is notable.

Meanwhile, choosing the wrong language for a project also affects productivity. If, for example, a software organization forces COBOL on a project that would be better off in RPG, C/C++, or Java, it should recognize that it is assuming a big risk.

For these reasons, Armour didn’t provide a really good basis for the column’s conclusion.

Jim Chorn
Portland, OR

To respond properly to Phillip G. Armour’s invitation to comment (Sept. 2006), I would first want more information. In particular, some of the variance in project outcomes and changes over time he cited from the QSM survey may be related to the degree of novelty in a project, the degree to which it integrated or utilized other software applications, and the interactions among its components or software modules and those in the information system it is meant to be part of.

We can imagine that projects implementing a system for which there is no existing precedent would produce different results from those implementing a system that may be new to the organization but not to the industry. By combining projects with a range of novelty, we may be doing a disservice to our own measurements.

Integration and software reuse raise issues of both software complexity and measurement. If I write an application that utilizes multiple APIs from multiple vendors, the amount of code I write may be relatively small (fewer LOC), even if the complexity of the application is commensurate with that of much larger applications. Moreover, the functionality of the new system that includes my LOC (plus existing applications) may be high.

Is the proper measurement strictly my LOC? No. Should project measurement include both new and reused LOC, say, my two lines to the millions in SAP? I doubt it. However, measuring projects that combine different levels of complexity and software reuse in the form of other APIs may do developers a disservice.
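
The point in miniature, as a sketch (pandas stands in for the millions of reused lines, and the input file is hypothetical):

    import pandas as pd

    # One line of "my" code, but the delivered functionality rides on the
    # hundreds of thousands of lines inside the reused library.
    print(pd.read_csv("sales.csv").describe())  # hypothetical input file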

The issue of software system interaction is similar to the issue of integration and software reuse but at a different level of observation. Software projects are often implemented in contexts in which they must interoperate with preexisting software systems that impose constraints on the new project, as well as provide a source of unanticipated interaction effects. Comparing the development of a silo application with an application intended to interoperate may yield poor insight and weak theory. As a result, we may look for solutions in all the wrong places.

Comparing average project performance may make sense for projects that follow a normal distribution and are largely unaffected by contextual circumstances. But projects may or may not follow such a distribution. Without better-defined constructs and a richer set of controls, I am not convinced that research results will lead to valid or meaningful project results.

The QSM results Armour discussed did not account for the degree to which a project was exploratory (novel) or exploitative. They also did not account for differences in the delivered system’s size and complexity or the nature of the larger system of which it will be but one component.

David Dreyfus
Boston

