Forum

Why Information Security Is Still a Folk Art

I share the view expressed in Mikko Siponen’s excellent "Technical Opinion" ("Information Security Standards Focus on the Existence of Process, Not Its Content," Aug. 2006) that information security is still a folk art. Siponen correctly pointed out that today’s standards provide little more than safeguard processes, not good-practice specifications or implementation content. Moreover, these safeguards are tricky, context-dependent, and often too complex to be effective against the enemies attacking them.

Whole books have been written about safeguards, including Managing an Information Security and Privacy Awareness and Training Program by Rebecca Herold (Auerbach Publications, 2005) and Information Security: Principles and Practice by Mark Stamp (Wiley Interscience, 2006). But I am most concerned about the Generally Accepted Information Security Principles (GAISP; www.issa.org/gaisp/gaisp.html) as a security management standard, along with the ISO, SSE-CMM, and ISF Standard of Good Practice. GAISP is a work in progress, currently sponsored by the Information Systems Security Association as a higher level of abstraction, or "broad functional principles," above the other standards, not as a standard of safeguard content and implementation. It explains the general purpose, objectives, and nature of information security. It is also, however, incomplete and inconsistent and requires significant effort to be put into acceptable form.

Donn B. Parker
Los Altos, CA


In Programming, as in Life, Know the Problem Domain

The old saw that old jokes are best applies just as well to some of the tools Stephen B. Jenkins discussed in his "Technical Opinion" ("Musings of an 'Old School' Programmer," May 2006), including in-depth knowledge of the problem domain and print statements as the primary debugging tool. I am an even older-timer than Jenkins, having written my first programs in ALGOL in 1965 and in Assembler in 1968. At the time, small machines could run only one program at a time and lacked an operating system. Debugging options were limited to strategically placed print statements or looking at the state of the memory (all 8K words) when the program crashed or the programmer deliberately stopped it. Unlike with print statements, however, examining memory showed the state of things only at the moment the program stopped, not what had happened on the way there. And the only way to look at memory was to click through each word and examine it in binary.

When I began assembly language programming, applications involved real-time signal processing, including an early implementation of the Fast Fourier Transform. Given that printing out something on a machine without I/O buffering destroyed the timing of the whole process, it was possible to "print" only a single character to memory. I would then manually look at what had been printed. Jenkins correctly pointed out that one enduring advantage of print statements is their independence from language and operating system.

Meanwhile, machine performance was dreadful. The machine on which I programmed the FFT took 25 msec to perform a fixed-point single-word addition and much longer for multiplication. It had only one level of address indexing and no floating-point hardware. Every floating-point operation therefore required a subroutine call.

The only way we could make things run anywhere near fast enough was to completely rewrite these subroutines, doing the standardization with the help of a lookup table followed by a single shift instruction of several places in one go. We standardized only after every second operation, though doing so required sacrificing one bit of accuracy. The FFT also required two levels of indexing, so we had to implement the second level by dynamically rewriting the code during execution, a case of better get it right the first time.
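
For readers who never had to do this by hand, here is a minimal modern sketch of the lookup-table idea, in Python with an assumed 16-bit word; the names, table, and word size are illustrative, not the original code.

    # Normalize a fixed-point word by looking up the shift count for its top
    # byte in a precomputed table, then shifting the required number of
    # places in one go instead of testing and shifting bit by bit.

    WORD_BITS = 16                        # assumed word size, for illustration
    WORD_MASK = (1 << WORD_BITS) - 1

    # SHIFT_TABLE[b] = number of leading zero bits in the 8-bit value b
    SHIFT_TABLE = [8 - b.bit_length() for b in range(256)]

    def normalize(value):
        """Return (normalized value, shift applied), with the top bit set."""
        if value == 0:
            return 0, 0
        shift = SHIFT_TABLE[value >> (WORD_BITS - 8)]
        if shift == 8:                    # the whole top byte is zero
            shift += SHIFT_TABLE[value & 0xFF]
        return (value << shift) & WORD_MASK, shift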

One thing that hasn’t changed is the importance of understanding the problem domain; with all the tools available today, programmers should actually be able to concentrate more on this understanding and thus on addressing the problem. I wonder how often big software system problems result from a lack of understanding of the problem, particularly in environments that emphasize specialized tools rather than the problem itself.

Ian R. Perry
Rosieres, Belgium


Measure Software Success as Delivered Value to the Customer

I appreciated Robert L. Glass's "Practical Programmer" column ("The Standish Report: Does It Really Describe a Software Crisis?," Aug. 2006). This frequently cited report has come under fire lately, and not only for its conclusion that the software business was, and continues to be, in a state of crisis. At least one study (Jorgensen and Molokken-Ostvold's "How Large Are Software Cost Overruns? A Review of the 1994 CHAOS Report," Information and Software Technology, Apr. 2006) concluded that the report's authors were exceedingly sloppy about such basics as the degree of overrun they measured, sometimes citing a 189% overrun (289% of the original estimate) and sometimes 189% of the original.
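
The two readings are far apart, as a quick check with made-up numbers shows (Python, illustrative figures only):

    # Suppose a project was estimated at 100 units of cost or schedule.
    estimate = 100.0

    actual_a = 289.0   # "189% overrun": the actual is 289% of the estimate
    actual_b = 189.0   # "189% of original": the actual is 189% of the estimate

    overrun_a = (actual_a - estimate) / estimate * 100
    overrun_b = (actual_b - estimate) / estimate * 100
    print(overrun_a, overrun_b)   # 189.0 vs. 89.0, very different claims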

A lot depends on what we mean by "failure." Having run some 30 large projects, I recall only one coming in under budget, on time, with zero defects, and without anybody working overtime. If any slippage, additional effort, reduced scope, or more defects than expected means a project has not met its goals, then the "failure rate" of software projects is indeed high. In practice, there is a reasonable target zone for success. A project that sticks to its schedule, doesn't involve too much overtime, delivers most of the functionality needed by the business, doesn't blow the budget by more than a few percentage points, and doesn't heap egregious errors on the unsuspecting customer at delivery might be counted as successful.
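
One informal way to read that target zone is as a handful of thresholds. The sketch below uses entirely hypothetical numbers and a hypothetical function name, just to make the shape of the judgment concrete; it is not a proposed metric.

    def within_target_zone(schedule_slip_pct, overtime_pct, scope_delivered_pct,
                           budget_overrun_pct, egregious_defects):
        """Rough success test; every threshold here is illustrative."""
        return (schedule_slip_pct <= 10          # roughly sticks to its schedule
                and overtime_pct <= 15           # not too much overtime
                and scope_delivered_pct >= 80    # most of the needed functionality
                and budget_overrun_pct <= 5      # budget off by only a few percent
                and egregious_defects == 0)      # nothing egregious at delivery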

Few projects or organizations actually predict quality, so we usually don't know how many defects would count as a failure. The real yardstick should be delivered value to the customer versus the cost to create that value. In terms of quality, the issue is not the raw count of defects but rather their effect on business practice.

My personal (though rather unscientific) observation is that some 20% of projects are pretty good, 10% are scrapped outright, and the other 70% have varying degrees of success or failure depending upon whether the organization’s glass is half full or half empty.

Phillip Armour
Deer Park, IL


How Definitive Is the Classical Church-Turing Thesis?

Peter Wegner and Dina Goldin's "Viewpoint" ("Principles of Problem Solving," July 2006) provoked strong criticism from Lance Fortnow [a professor of computer science at the University of Chicago] in his "Computational Complexity" blog (weblog.fortnow.com/, dated July 14, 2006): "… Wegner and Goldin misinterpret the Church-Turing thesis …"

What Fortnow apparently overlooked was that the classical Church-Turing Thesis—expressed as a strong identity—is committed to the following strong definition of computability:

(i) Def: A number-theoretic function/relation (treated as a Boolean Function), say F(x1, x2, …, xn), is effectively computable if, and only if, there is an effective method T such that, given any sequence of natural numbers (a1, a2, …, an), T will always compute F(a1, a2, …, an).

The classical form of CT follows from (i):

(ii) Classical CT: A number-theoretic function/relation is effectively computable if, and only if, it is partial recursive/Turing-computable.

Is such a strong commitment necessary?

The question of whether (ii) is a definition, a theorem, or a thesis was the subject of a substantive exchange of views between Church and Goedel, neither of whom was apparently comfortable accepting (ii) as a thesis.

That their discomfort was well-founded is apparent if we replace (i) by the weaker and broader:

(iii) Def: A number-theoretic function/relation, say F(x1, x2, …, xn), is effectively computable if, and only if, given any sequence of natural numbers (a1, a2, …, an), there is an effective method T, which may depend on (a1, a2, …, an), such that T will always compute F(a1, a2, …, an).

Clearly, (i) implies (iii), so (iii) is the weaker, less demanding definition. Hence, by the dictum of Occam's razor, (iii) is to be preferred as the definition of effective computability.
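
The difference between (i) and (iii) is simply the order of the quantifiers, which can be made explicit (a schematic rendering in the letter's symbols, not a formula from the cited sources):

    \begin{align*}
    \text{(i)}\;  &\; \exists T \;\, \forall (a_1, \ldots, a_n): \; T \text{ computes } F(a_1, \ldots, a_n)\\
    \text{(iii)}\;&\; \forall (a_1, \ldots, a_n) \;\, \exists T: \; T \text{ computes } F(a_1, \ldots, a_n)
    \end{align*}

A single method that works for every argument in particular supplies a method for each argument; the converse need not hold.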

Moreover, the significance of (iii) over (i) is that it admits the broader possibility of number-theoretic functions/relations that are effectively computable instantiationally but not algorithmically.

The significance, and necessity, of a thesis linking effective computability with recursiveness/Turing-computability then emerges if we express CT as a weakened equivalence rather than as a strong identity:

(iv) Instantiational CT: A number-theoretic function/relation is effectively computable if, and only if, it is instantiationally equivalent to a partial recursive/Turing-computable function/relation.
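
Read schematically (again my rendering, not the letter's own formulation), instantiational equivalence of F to a partial recursive/Turing-computable g asks only that

    \[
    \forall (a_1, \ldots, a_n): \; F(a_1, \ldots, a_n) = g(a_1, \ldots, a_n),
    \]

with no requirement that a single uniform algorithm establish the agreement for all arguments at once.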

We can then admit number-theoretic functions/relations that are effectively computable instantiationally but not algorithmically.

Are there any functions/relations that would justify (iv) intuitively?

Clearly, Turing’s Halting function and Chaitin’s Omegas are well-defined, total, number-theoretic functions that are effectively computable instantiationally but not algorithmically.

However, in the absence of (iv), we cannot link them instantiationally to recursive/Turing-computable functions.

On the other hand, if [(∀x)R(x)] is Goedel's undecidable arithmetical proposition, then, under the reasonable assumption that whenever a total arithmetical relation is Turing-computable the algorithm can be converted into a proof sequence in Peano Arithmetic, it can be shown that the arithmetical relation R(x) is effectively decidable instantiationally but not algorithmically.

This case is significant since Goedel showed (in Theorem VII of his seminal 1931 paper on undecidable arithmetical propositions) that R(x) is, indeed, instantiationally equivalent to a primitive recursive relation (which is decidable algorithmically) without appeal to any form of CT.

So Wegner and Goldin's reservations about the unrestricted adoption of classical CT as definitive, and their underlying belief that there can be effectively computable functions/relations that are not Turing-computable, ought not to be dismissed so cursorily, even though the relevance and validity of the particular arguments they offer in support of that belief are not obvious.

Bhupinder Singh Anand
Mumbai, India
