Computing Applications Forum


  1. GSD Benefits Still Not Worth the Trouble
  2. Let Interactive Genetic Algorithms Provide Music Accompaniment
  3. Insist on Risk Management in Software Engineering
  4. Could One Head Be Better Than Two?
  5. Would Church or Turing Make Every Function Computable?
  6. Still No Proof of the Unsolvability of Turing's Halting Problem
  7. References
  8. Author

While guest editors Pär J. Ågerfalk and Brian Fitzgerald are to be commended for their special section "Flexible and Distributed Software Processes: Old Petunias in New Bowls" (Oct. 2006), the reality of the subject it covered—globally distributed software development (GSD)—fell short of the section's claim that the challenges of the technology can be addressed through agile/flexible methods. I counted 12 separate figures, tables, and diagrams with various combinations of solutions arrayed against the related GSD problems. Alas, not one of them included empirical evidence that any of these solutions improves quality, reduces cycle time, or makes it more likely developers will meet schedule and cost goals. Ågerfalk and Fitzgerald may not have intended for me to reach this conclusion, but I am more convinced than ever that GSD problems outweigh GSD benefits.

Girish Seshagiri
Annandale, VA

Guest Editors’ Response:

Seshagiri is correct in his analysis that the articles in the special section did not reach consensus on how best to overcome GSD problems, a conclusion we anticipated and find intriguing at the same time. More important (and this goes beyond the articles in the section) is acknowledging that there are indeed GSD problems. But are there also benefits? Answering that question is, as we discussed in our "Introduction," precisely why and where we choose to focus our own research effort.

Pär Ågerfalk
Brian Fitzgerald
Limerick, Ireland


Let Interactive Genetic Algorithms Provide Music Accompaniment

In their article "Music Score Alignment and Computer Accompaniment" (Aug. 2006), Roger Dannenberg and Christopher Raphael speculated that accompaniment systems "will someday also find a place in jazz and other improvisatory music." With this prospect in mind, I've been exploring an existing project called GenJam (short for Genetic Jammer) at www.it.rit.edu/~jab/GenJam.html run by Al Biles of the Rochester Institute of Technology, Rochester, NY. Since 1993, Biles has been developing this jazz accompaniment and soloist program (he calls it "the software sideman in my Virtual Quartet") that uses genetic algorithms to evolve its accompaniment style.
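The evolutionary idea behind such a program can be sketched in a few lines. This is not Biles's actual GenJam code; GenJam is interactive, with a human listener supplying the fitness feedback, so the automatic fitness function below (rewarding notes from the C-major pentatonic scale) is only a stand-in for illustration:

```python
import random

random.seed(42)
PENTATONIC = {0, 2, 4, 7, 9}  # pitch classes of the C-major pentatonic scale

def fitness(phrase):
    """Stand-in fitness: count notes that land in the target scale.
    (In GenJam, a human listener's feedback plays this role.)"""
    return sum(1 for note in phrase if note % 12 in PENTATONIC)

def mutate(phrase):
    """Copy the phrase with one note replaced at random."""
    child = list(phrase)
    child[random.randrange(len(child))] = random.randrange(60, 72)  # MIDI C4..B4
    return child

# Evolve a population of random 8-note phrases.
population = [[random.randrange(60, 72) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                               # keep the fitter half
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring

best = max(population, key=fitness)
```

After a few dozen generations the best phrase consists almost entirely of in-scale notes; replacing the automatic fitness with human judgments is what makes the approach interactive.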

Trevor R.H. Clarke
Dayton, OH


Insist on Risk Management in Software Engineering

While I find myself in agreement with what Václav Rajlich had to say in his article "Changing the Paradigm of Software Engineering" (Aug. 2006), possibly the single most important aspect of this type of fleet-footed software engineering—placing it in a risk-management context—played little or no part in the article. Rajlich outlined several current tendencies in software development intended to help align it with business processes, along with a series of open questions requiring further research. Small changes to that software may inadvertently but radically alter its business risk profile; if the software development process is not placed solidly in a risk-management framework, these changes may well go unnoticed. For more on this subject, see Gary McGraw's book Software Security: Building Security In (Addison-Wesley, 2006).

Niels J. Bjergstrom
Lincoln, England

I enjoyed Václav Rajlich’s article (Aug. 2006) but expect even more revolutionary paradigm changes in, say, the scope of software engineering. For example, integrating nonprogrammer-oriented tools and associated culture in a software-development process is a significant issue in many projects; Web development is a good example.

Flash artists and ActionScript programmers work together, even during implementation. And tools help non-IT professionals participate deeply in a development process, not just in requirements specification. We need to include even more non-IT activities and professionals in that process, along with ways to prevent isolating them on the other side of the requirements wall.

Requirements volatility is related to the issue of having to involve the entire scope of a software development process in all major decisions. Software requirements—the interface between a particular development process and the rest of the world—are a bottleneck. What is needed is better integration of software-development practice with general product design and development practice. Unfortunately, however, integration may also yield domain-specific flavors of a particular development process.

Integrating software development into non-IT domains and non-IT professions into software development overturns traditional boundaries of software engineering. New software engineering paradigms are thus needed to address the related implications.

Nebojso Vasiljevic
Belgrade, Serbia


Could One Head Be Better Than Two?

A question raised by Charlie McDowell et al. in their article "Pair Programming Improves Student Retention, Confidence, and Program Quality" (Aug. 2006) relates to the claim that paired students produce significantly better programs than those who work alone. The authors did not mention whether they gave all the students participating in their experiment the same amount of time to complete their assignments. If they did, then the pairs likely spent significantly more working hours on them, possibly accounting for the reportedly better programs. More information is needed.

McDowell et al. wrote "Students in the pairing sections submitted a list of three names of potential partners, and partners were assigned based on these preferences. In nearly all instances, students were assigned a partner from their list." Partnering with a friend or compatible person on difficult intellectual work generally produces significant benefits and could account for some or even most of the reported gains. This factor should be removed from the study in order to produce a true picture of pair programming, especially since the assignment of personnel in industry is generally the prerogative of their supervisors.

Alex Simonelis

Authors’ Response:

We gave the students roughly the same amount of time (about two weeks) to complete the assignments and asked them to self-report the time they spent on the programs. Somewhat to our surprise, those working in pairs reported spending more time than those working alone—200 hours per student in a pair vs. 170 hours for solo students over four projects.

Although our analysis of the reported times made us somewhat skeptical about their accuracy (for example, an unusually large share of the hours were reported as multiples of five), the reported differences were statistically significant under an analysis of variance (ANOVA) test.

Because we were studying the efficacy of pair programming in an educational setting, we were not bothered by the possibility that the higher scores were due to greater effort on the part of the paired students. Quite the contrary, we were delighted that working in pairs motivated them to spend more time on their assigned tasks. Likewise, if their self-selected choice of "a friend or compatible person" helped motivate or otherwise help them achieve superior results, then it would seem that more learning was going on, and hence our educational objectives were being advanced.

Charlie McDowell
Linda Werner
Heather E. Bullock
Julian Fernald
Santa Cruz, CA


Would Church or Turing Make Every Function Computable?

Bhupinder Singh Anand's Forum comment "How Definitive is the Classical Church-Turing Thesis?" (Oct. 2006), in response to Peter Wegner's and Dina Goldin's "Viewpoint" ("Principles of Problem Solving," July 2006), trivializes the subject by making every function computable. Anand's proposed definition—"effectively computable instantiationally"—amounts to the following: A function F is computable if, for any input values, there is an effective method T, possibly depending on those input values, that computes the corresponding value of the function. Thus, the choice of T can be different for each input. Now, let T0 be the effective method that always outputs 0, let T1 be the effective method that always outputs 1, and so on. For any particular input to F, the output will be some number, say, n. Now choose the effective method Tn.

One might object that, although each output is produced by some effective method for a given input, we don't know which effective method to choose. If a requirement is added that there be some effective method for picking the appropriate method, then the definition becomes equivalent to the usual Church-Turing thesis.
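The triviality can be made concrete in a short sketch. Here `make_constant_method` is a hypothetical helper (it builds the constant method Tn), and `F` is just a stand-in function; the point is that the argument would apply equally to any F, computable or not:

```python
def make_constant_method(n):
    """T_n: the effective method that ignores its input and always outputs n."""
    return lambda *args: n

# Under the "instantiational" reading, F counts as computable because,
# for each particular input, SOME constant method produces the right
# output -- even though nothing tells us which one to pick.
def F(x):          # a stand-in; the argument applies to any function F
    return x * x

x = 7
n = F(x)                        # the value we would need to already "know"
T_n = make_constant_method(n)   # the constant method that happens to match
assert T_n(x) == F(x)           # T_n "computes" F at this one input
```

The catch, as noted above, is that selecting the right T_n for each input presupposes knowing F's value there; requiring an effective selection procedure collapses the definition back into the usual thesis.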

Melvin Fitting
The Bronx, NY

Author’s Response:

Fitting’s argument that every function can be shown to be effectively computable instantiationally under the proposed "redefinition" assumes that, given F(x1, x2, …, xn), we can effectively "know" the value F(a1, a2, …, an) for any given (a1, a2, …, an) in order to define its effective methods {Ta, Tb, …} where Ts outputs F(s1, s2, …, sn) on input (s1, s2, …, sn). This presumes, of course, that F is effectively computable instantiationally.

Bhupinder Singh Anand
Mumbai, India


Still No Proof of the Unsolvability of Turing’s Halting Problem

The assertion by Peter Wegner and Dina Goldin in their "Viewpoint" ("Principles of Problem Solving," July 2006) that Alan Turing’s famous paper introducing the Turing machine [3] contained a "proof of the unsolvability of the halting problem" is incorrect. Turing discussed and proved that a property of Turing machines called "circular" cannot be tested by a Turing machine. A Turing machine is circular if it "never writes down more than a finite number of symbols of the first kind" where "the first kind (called figures) consists entirely of 0 and 1 (the others being called symbols of the second kind)" [3]. (See Turing’s original paper for the significance of these concepts.)

Machines that halt are circular. However, non-halting machines that never write more than a finite number of 0s and 1s are also circular. So these properties are not equivalent.

Turing subsequently proved that a Turing machine cannot test whether a machine ever prints a particular symbol. This property is also not equivalent to the halting property.

The mistake of placing a proof of the incomputability of the halting problem in Turing’s paper [3] was also made in [1].

The earliest published proof I know of that the halting property cannot be tested by a computer was in [2], but earlier proofs may well exist. Indeed, the author of [2] recalled that Turing gave him a verbal proof in 1953.
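Strachey's one-paragraph argument in [2] can be rendered as a brief sketch (a hypothetical Python rendering, with `halts` standing for the supposed decider that [2] shows cannot exist):

```python
def make_strange(halts):
    """Given a claimed total decider halts(program, input), build the
    self-defeating program at the heart of Strachey's argument."""
    def strange(p):
        if halts(p, p):       # if p would halt when run on itself...
            while True:       # ...then loop forever,
                pass
        return None           # ...otherwise halt immediately.
    return strange

# Running strange on itself is contradictory: if halts(strange, strange)
# is True, then strange(strange) loops forever; if False, it halts.
# Either way halts gave the wrong answer, so no total, correct decider
# for halting can exist.
```

Note that this is an argument about the halting property specifically, which, as discussed above, is distinct from the "circular" and printing properties Turing actually treated in [3].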

Thorkil Naur
Odense, Denmark



    1. Chaitin, G. The Unknowable. Springer, 1999.

    2. Strachey, C. An impossible program. Computer Journal 7, 4 (1965), 313.

    3. Turing, A. On computable numbers, with an application to the Entscheidungsproblem. In Proceedings of the London Mathematical Society, Series 2, Volume 42 (1936–7), 230–265.
