Forum

In DRM, Rethink Business Practices, Not Just Technology and the Law

The digital rights management (DRM) technology discussed in the special section "Digital Rights Management and Fair Use by Design" (Apr. 2003) has difficulty simultaneously serving commercial needs and the public interest because of the deep tension among its wide-ranging goals: honoring fair use, providing appropriate economic benefit to publishers and authors, and enabling interoperability across multiple system vendors for general digital access. That tension results from the asymmetry in resources among publishers, vendors, and consumers. When publishers lose control of valuable works, who is liable? The platform vendor? The party misusing fair use? The author of the virus that disabled the DRM software? These questions can't be addressed completely through variations in DRM architecture; the bar in trusted systems technology needs to be raised periodically as well.

In the article "The Bit and the Pendulum: Balancing the Interests of Stakeholders in Digital Publishing" (American Programmer, Sept. 1997), Alex Silverman and I proposed an approach to this conundrum based on industry-funded insurance, fair use "licenses," and an international organization of stakeholders we called The Digital Property Trust. I bring these ideas to your attention in the interest of extending the public discourse on DRM to increase recognition that the problem is not just about technology and the law. We also need to rethink our business practices in order to craft a sustainable solution.

Mark Stefik
Palo Alto, CA

Don’t Forget Mental Models in Systems Development

Reading David E. Avison and Guy Fitzgerald's "Where Now for Development Methodologies?" (Jan. 2003), I was struck by the continuing lack of recognition of the central role played by mental models in the development of IT systems.

No matter what activity we may engage in, we all operate within the framework of our own mental models of the activity in question. Thus, if an IT system is being developed to assist the actors in a particular task realm, it is essential for the success of that development that the IT system (really just a computerized model of the task realm) fit comfortably with the mental models of the task realm held by the actors.

Similarly, developers of IT systems for use within a task realm develop these systems according to the mental models they have of the task realm. If their models do not correspond closely to the mental models of the actors in a particular realm, then the IT system they develop is unlikely to satisfy the actors.

The root problem facing IT developers is how to capture and document the mental models of the actors in a way that will be understood by both the developers and the actors and thus verified for accuracy by the actors.

Some years ago, when I was lecturing in business information systems at a university, I taught my students to use an adaptation of semantic nets for this purpose. These nets consist of the actors' own terms for the concepts in their task realms, so they are easily understood by the actors.
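
The letter doesn't say what notation these nets used; one plausible rendering, sketched below in Python, is a set of labeled (concept, relation, concept) triples in which every term comes from the actors' own vocabulary (the terms here are invented for illustration).

```python
# A sketch of a semantic net as labeled triples; all vocabulary is the
# actors' own (the specific terms below are hypothetical examples).
semantic_net = {
    ("customer", "places", "order"),
    ("order", "contains", "line item"),
    ("line item", "refers to", "stock item"),
    ("warehouse", "holds", "stock item"),
}

def neighbors(net, concept):
    """List every (relation, other concept) pair touching a concept."""
    return ([(rel, b) for (a, rel, b) in net if a == concept] +
            [(rel, a) for (a, rel, b) in net if b == concept])

print(neighbors(semantic_net, "order"))
# e.g., [('contains', 'line item'), ('places', 'customer')]
```

Because the net is stated entirely in the actors' terms, the actors themselves can inspect it and confirm, or correct, the developers' model of the task realm.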

Rory Short
Wits, South Africa

What the Turing Machine Doesn’t Model

I was somewhat puzzled by two statements in the Technical Opinion column "Computation Beyond Turing Machines" by Peter Wegner and Dina Goldin (Apr. 2003).

The first is that Turing machines are, more or less, inadequate for modeling modern computer systems. This is somewhat implied in the column (though explicit in [1]): "[T]he Turing machine model of computation […] falls short when it comes to modeling modern computer systems, whose hallmarks are interaction and reactivity."

It appears this statement is based on a confusion between modeling a machine and modeling an application of that machine. Computers today, just like their counterparts 40 years ago, are general-purpose symbol-processing machines. The logical model of the machine hasn't changed, and, just as Turing machines were an adequate model of physical computers back then, they remain one today, simply because the machines they model haven't changed fundamentally.

It is immaterial that the focus of our applications has shifted from "number crunching and data processing to embedded systems and graphical user interfaces." The authors seem to forget that computers perform symbol manipulation, not number crunching (whatever that is), that embedded systems and graphical user interfaces are indeed examples of data processing, and that user interfaces are not, per se, an application, but only a way to interact with an application that does something useful.

The machines that do these things are basically the same, and the same model applies to them, in the same way the abstract model of a TV set is always the same, regardless of whether it is used to watch Citizen Kane or the latest reality show.

The authors might argue that computers are not the best type of machine for some of the applications we use them for and that we should change our model of computation—and, therefore, the machine as well. There may be some merit to doing so, but this doesn’t appear to be what the authors are saying; they seem quite happy with computers as they are and just want to change the model.

The second statement has to do with the notion of including interaction in the computational model, a solution that appears to present a number of problems. Because user input is unpredictable, viewing the user as part of the model leads to unpredictable behavior of the composite "machine + user." This unpredictability is different from nondeterminism if we assume the number of options available to the user is unlimited. This may sit well with the authors, as they seem to advocate the transformation of computer science into a nonmathematical discipline. But if that is the case, it is difficult to see how one can determine even whether a program works correctly (let alone design a program so it works correctly), since our only model is complete unpredictability.

However, if we restrict the user’s input to a finite set of possibilities, then our goal is to determine the reaction of the system to every possible input, a task for which Turing machines are perfectly adequate.
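
To make the point concrete, a toy sketch in Python (system and possible_inputs are hypothetical stand-ins): with finitely many possible inputs, determining the system's reaction to every one of them is plain enumeration, well within what a Turing machine models.

```python
# A toy illustration: over a finite input set, the system's reaction to
# every possible input can be determined by exhaustive enumeration.
def reactions(system, possible_inputs):
    return {i: system(i) for i in possible_inputs}
```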

Including the human component of the interaction in the model has some obvious advantages. For one, all questions about machine intelligence disappear instantly. Unfortunately, this might happen only by cheating, so to speak.

Consider the following interactive machine for intelligent decision making; the input is a string X that formalizes a problem statement requiring intelligence for its solution:

  1. Print X
  2. Ask the user for a decision
  3. Print the decision
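
Rendered in, say, Python, the entire "machine" is a few lines (a minimal sketch; the prompt string is invented), which makes plain where the intelligence actually resides:

```python
# A sketch of the letter's interactive machine for "intelligent" decision
# making; the machine merely shuttles symbols around, while all the
# intelligence is supplied by the human answering input().
def decide(x: str) -> str:
    print(x)                           # 1. Print X
    decision = input("Decision? ")     # 2. Ask the user for a decision
    print(decision)                    # 3. Print the decision
    return decision
```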

The "interactive complex" may be intelligent but doesn’t do computer scientists much good. In other words, it seems like the only way in which an interactive machine can compute non-Turing-computable functions is by interacting with something that, by itself, can compute non-Turing-computable functions. Unless this something is modeled (and the authors give no clue as to how to do it), the model begs the question of computability while providing no help in modeling computing systems.

One can be more specific on these points by looking at [1], a more technical article with significantly more detail. Its main theoretical construct is the Persistent Turing Machine. Such a machine performs a series of "macro-steps" <w_i, w> => <w', w_o>, each a normal Turing machine computation, where w_i is the input of the step, w is the initial contents of the state tape, w' is the final contents of the same tape, and w_o is the output of the step.

The word "persistent" derives from the fact that the state tape is preserved from one interaction to another—a fact easily simulated on a Turing machine by making the state part of the output of a step and using it again with the next input.
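
That simulation is worth spelling out; a minimal sketch in Python, assuming each macro-step is an ordinary computable function step(state, w_in) -> (state, w_out):

```python
# Simulating "persistence" on an ordinary machine: the state tape is simply
# threaded from each macro-step to the next along with the input stream.
def run_persistent(step, state, inputs):
    outputs = []
    for w_in in inputs:                    # one input per interaction
        state, w_out = step(state, w_in)   # a normal Turing-computable step
        outputs.append(w_out)
    return outputs
```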

A computation, according to [1], is a sequence of such steps in which the output w_o of a step is transformed into the input w_i of the following step by an external agent; that is, the external agent performs the transition <w', w_o> => <w_i, w'> (note the agent doesn't alter the state). What is this external agent? If it is a Turing machine, then the class of functions computed by this model is the same as that computed by a single Turing machine. If the agent is capable of computing non-Turing-computable functions, then the whole system obviously has more power than a Turing machine.
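
If the agent is itself computable, the composite collapses into one ordinary loop; a minimal sketch reusing the step function from above (agent is a hypothetical computable function mapping each output to the next input):

```python
# When the external agent is computable, the "interactive" computation is
# just an ordinary loop, so nothing beyond a Turing machine is needed.
def run_with_agent(step, agent, state, w_in, n_steps):
    trace = []
    for _ in range(n_steps):
        state, w_out = step(state, w_in)  # the machine's macro-step
        w_in = agent(w_out)               # the agent: <w', w_o> => <w_i, w'>
        trace.append(w_out)
    return trace
```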

However, the crucial point is that, unless such an agent is another Turing machine, the model gives no account of the agent itself. If anything, it confirms that a Turing machine is perfectly adequate for modeling the computational part of the system.

Now consider another analogy: while it is true that Clark Kent entering a phone booth is transformed into Superman, having a good model of the phone booth doesn't tell us how the transformation takes place.

The model might be perfectly reasonable for modeling computers in interaction systems. But the fact that the complete interaction system (computer plus person) might have more than Turing capabilities doesn’t make the model per se go anywhere beyond the Turing machine. The trick, as in the case of Clark Kent, is in the part we are not modeling.

Simone Santini
San Diego, CA

Authors Respond:
Our notion of what is computable has changed greatly over the past 40 years. Moreover, as stated in our column, even Alan Turing himself showed that Turing Machines could not model all mathematical or computational problems. Super-Turing models beyond traditional Turing Machines are necessary for problem solving; interaction and Persistent Turing Machines are useful for this purpose.

The claim that Turing Machines provide a complete model for all problem solving was wrong when proposed in the 1960s and should be replaced as a foundation for models of computation by new models along the lines we propose.
