Opinion
Viewpoint

Theory, Theory on the Wall

Could it be that computing itself is just too unwieldy a field for any theory of substance?

Theory, theory on the wall, tell me who is the fairest of them all? How do we determine whether a theory effectively captures the spirit (and hence the beauty) of a field? For instance, human-computer interaction (HCI) is a field of computer science that purports to guide designers in organizing the interaction with computers. Over the years, many tips and much craft wisdom have accumulated to do just that. And yet, where is the theory in HCI that will pull all this together into a coherent picture?

Looking at the recent book Human-Computer Interaction in the New Millennium (ACM Press, 2002), one sees a lot of talk but little substance. Maybe that just reflects our academic way of doing things nowadays, when the measure of an academic is often the quantity of talk rather than its quality or originality. But is that the gist of it, or just one aspect of it?

Perhaps the problem of theory goes deeper than the sociology of the scientific community. Could it be that computing itself is just too unwieldy for any theory of substance? That might well be the case, not in areas such as hardware or foundations, but rather in areas such as HCI or software methods, areas more on the human side of the computing equation. Let’s see what we can make of it.

Anything dealing with the human connection must rely on the underlying human sciences, in particular psychology. And yet, theory building in psychology is fraught with difficulty. I speak here of scientific theory building, which has an integrative goal, not of wild speculations of an interpretive nature, such as psychoanalysis. Speculation in theory is important and needs to be encouraged, but factual integration remains an essential aspect of theory, not to be forgotten.

As I’ve said, theory building in psychology is fraught with difficulty. Take learning theory, for instance. John Anderson’s theory of learning is perhaps the best-developed theory in the field and accounts extremely well for the learning interactions that take place within highly programmed learning environments, such as those of computer-based tutors. But it will not handle the looser learning environments typical of traditional learning settings.

Attempts to generalize a theory beyond its original contexts of study, a natural inclination in theory building (to be resisted, of course), can lead to ridicule, as happened to old man Skinner, who insisted his behaviorism explained all learning. The later demise of programmed learning as an instructional technique showed the boundaries had been overstepped.

Theory succeeds well in the physical realm, where physical laws seem robust and unification proceeds with gusto. Seeking unifying models of phenomena is, after all, the prime role of theory. Why so successful there? A question of much greater funding? Or of simpler phenomena? Medical science, well funded as it is, still shows us grappling with tremendous complexity. And the applied side of physics (engineering) is itself short of theory, relying on the underlying physics for its models.

Computing itself is an aspect of engineering, dealing, however, with the virtual realm. Computing deals with representations, be they complex simulations or simple numbers. While it has important consequences in the physical and mental realms we all operate in, computing’s virtual realm (see Communications, Feb. 2000) is artifactual, that is, designed. And as such, constrained. In its constrained areas, computing theory is successful.


And yet, because of its nature as an abstract tool for representing and manipulating things out there in the world, computing remains as creative, open, and artistic as the phenomena it represents. And so it must tie into the fabric of human psychology if we humans are to find it at all useful. It must deal with HCI, and it must confront the challenges of HCI.

Human complexity is of course legendary, hence the difficulty. Wouldn’t computing be great, many a software engineer has asked, if it weren’t for those users? All in good jest, of course, but something more fundamental may well lie hidden beneath the jesting.

On one hand, we accept the self-importance of humans, who insist on being in charge and on dealing with all the little decisions made along the way when accomplishing some task. On the other hand, the story of computing is one of computing progressively distancing itself, first from the programmer and later from the user, who is eventually quite happy to hand over the nitty-gritty as long as control remains clear.

And so, we end up with more and more computing being hidden under the hood, the driver in control but uninterested in the operation of the engine. As the technology of agents matures and expands, we will enter a new chapter in the history of computing. In particular, HCI will expand to encompass interactions with agents and among agents, and computing will deal a little less with human interactions. Not that human concerns will disappear: ethical and spiritual issues will come to the fore, with constant questioning about the human role in all this.

Agent interactions may appear simpler than human interactions, and hence easier to build theory around. Not so, for we will seek to build those agents with as much intelligence as we can, eventually providing them with as much autonomous agency as we dare. We will not reduce the decision context, but rather continue to populate it with further decision makers.

And HCI will continue to thrive, most likely dealing more with psychological issues touching on the interests of the user and the psychological environment of the task than with the traditional issues of cognitive capacity and sensory interaction.

The prospects for theory in all this? The difficulties seem to have expanded manifold, and with them the hope for general theories within computing has receded. As we increase the complexity of our world (just contrast today’s work environment with that of a century ago), increased contextualization of theory may be inevitable, despite the fact that, deep down, all interrelates and is thus unified.

Theory, theory on the wall … there is no magic mirror after all. But then, who knows what non-human-centric computing will come up with, theory-wise.
