This letter is a response to various points of view expressed in the March 2000 Communications. For want of a better title, I will call it a manifesto, entitled “A Plea for Dumb Computers.”
The focus of the issue is improvement of the human-computer interface. The articles in the special section on perceptual user interfaces illustrate work on graphic interfaces, new, less-structured kinds of programming, and various “smarter” computers that sense the world around them and the intentions and responses of their users. Some of the research described is novel and ingenious. Within limits, people do what they want, so the work behind the articles has a value its progenitors take for granted. However, all of us who work in these disciplines—that is, everyone including computer scientists, engineers, and programmers—make up a tiny fraction of the world’s population, one with high internal reflection and a myopic, extremely distorted view of the outside world.
First the facts: As everyone familiar with “the world as a village of 100 people” knows, of the approximately six billion people on earth, three billion suffer from malnutrition, more than four billion are illiterate, and nearly five billion live in substandard housing. Half of the total have never seen a computer or even made a telephone call. These are just facts; I’m not trying to raise a moral or guilt issue, but a functional issue.
Second, the premise: The March cover legend reads: “The Intuitive Beauty of Computer-Human Interaction; Let users engage technology as naturally as they do the rest of the wide world.” I believe that, except for infants, a few dancers, a few artist-craftspersons, and a few natural athletes, all of whom are genetically preselected, the assumption behind this statement is false for most members of the developed world. We do not engage the wide world naturally, intuitively, or gracefully. We bump, scrape, fumble, and trip; we forget things, drop things, cut ourselves, and break bones; we dent our vehicles and those of others frequently. Thousands of psychologists and family counselors will testify that, using speech and gesture skills practiced since infancy, we communicate miserably on a regular basis, even with people we have known for years.
We get through most days without serious injury through the rote application of learned routines not unlike simple computer programs: First your pants, then your shoes; look for the reflector to spot the driveway; push the green button to start the VCR. The addition of even simple technology to most people’s existence tends to erode their intuitive interface with the world. It is probable that many people in undeveloped regions engage the world more intuitively and naturally than the general population of computer users. (Again, except for a few adepts who, if they are lucky, discover their natural abilities early in life and manage to build careers congruent with them.)
We reward these people with notoriety and public applause because they are unusual—great ballet dancers, athletes, pilots, race car drivers—all of whom have extraordinary skill at navigating the world and the tools they use to interact with it. But we use computers, and anyone who, like me and many people reading this, has used computers every day for 30 years knows they are poor tools with poor interfaces. They are dumb, clumsy, inefficient, and unreliable. They strain the user’s eyes, hands, and patience. Even at their best, they seldom provoke the esthetic and sensual response, above and beyond mere function, found with other, more refined tools.
Computers are imperfect servants. Can you doubt for a moment that, following their historic path of development, they will make anything but dumb, clumsy, imperfect masters?
Is there an alternative? How about remembering what computers could and should be: not partners, but excellent tools? There is already a paradigm for the development of great tools; it’s called “refinement.” It consists not of making tools smarter but of making them better at the job they are supposed to do. And guess what? The principal way this is perceived is that the tool-user interface becomes more natural, more graceful, more intuitive. What’s more, the result is easy to identify.
Try cutting food with a fine Japanese sashimi knife. This simple-looking object is made of steel folded and laminated many times, so the actual cutting edge is a mirror-bright line of very hard steel as thin and sharp as a razor. It doesn’t break, because it is welded inside an envelope of tougher, more ductile steel. Cutting with it is sheer pleasure, turning a kitchen chore into an effortless sensual exercise—every time you use it. The same can be said of smoothing a board with a Lie-Nielsen plane. But these are simple systems, and a computer is a complex assembly of many parts. Does the same principle apply?
You bet it does. Any pilot who has flown loops in a Bücker Jungmeister or repeatedly corrected the stubborn Dutch roll of a Bonanza knows about the refinement of complex systems. The Bücker’s marvel of harmonious control is like using a sashimi knife in four dimensions. It didn’t get that way by accident or by being made smarter. (It doesn’t have an autopilot.) It got that way because expert pilots and engineers refined it—went over the same system again and again, eliminating roughness and equalizing forces—until it became more an extension of the pilot than a tool, a plane you wear rather than fly or, as the well-known and accurate phrase goes, fly by the seat of your pants. By definition, an intuitive interface.
Where is the computer you fly by the seat of your pants? Look at the most elementary applications. Where is the word processing program that looks as clear and sharp as a sheet of paper but onto which your words flow beautifully, without illogical menus and clumsy learned procedures? After 25 years, where is a version 2.0 that prints WYSIWYG for all typefaces, italics, symbols, subscripts, and superscripts, instead of offering useless bells and whistles? Where is the computer whose display matches the scope, wavelength, and resolution of the human visual system and whose input matches the kinematics of the human body?
Intuitive, natural computer interfaces will not evolve by making computers smarter simulacra of the brains we don’t yet understand and manage poorly, but by designing better tools for the entire human race. I have to go outside English to borrow some straightforward credos for this redirection of computer design. In 1886, Gottlieb Daimler hung a four-word framed motto on the wall of his office—“Das Beste oder nichts”—The Best or Nothing. A few years later, the Miele company adopted an even shorter two-word motto, which it observes to this day: “Immer Besser”—Always Better. I would like to second the impulse for a more intuitive, natural human-computer interface. After reading the March Communications, “smarter” has a hollow ring and is, by itself, a false goal. I think we will arrive at intuitive interfaces only via “better,” as in basic refinement, if only we have the patience, skill, and brains for it.
Morton Grosser
Menlo Park, CA
Perceptual User Interfaces sounded like a great idea, until I got to remembering “Risks” (Communications’ monthly back-page column) and my real-life experiences with Windows and Word. How long has it been since you were composing some document and Word did something you didn’t expect or want—and you couldn’t figure out (or find out from that ridiculous “Help” guy) how to disable the thingy that was doing something “for you” you didn’t want done at all? As Pentland says (“Perceptual Intelligence,” Mar. 2000, p. 35), “…these new systems could become more difficult to use than current systems.” He goes on to explain how he and his co-workers have overcome some of the errors made by early PUIs. But how do we get away from the Walt Disney World he is developing “for us” if it does something we don’t like, just as Word did “for me”? If a pervasive PUI perceives as new neighbors from down the street what I perceive as a home-invasion robbery—who ya gonna call? Ghostbusters?
When we continually read of systems, apparently carefully designed and developed, that still fail, how will we come to trust the adaptive, reactive PUIs? I even wonder whether such omnipresent technical gizmos may shock their developers by triggering a backlash like the one Monsanto got from its genetically modified organisms, which I’m told the company envisioned as helping humankind. I suggest PUIs need a whole lot more ordinary-user, truly collaborative development than they are getting nowadays.
Don Walter
Los Angeles, CA
I find it somewhat ironic that Robert Glass took such pains to distinguish Edward Yardeni from Ed Yourdon in his “Practical Programmer” column (Mar. 2000, p. 17) on gloom-and-doom predictions prior to 2000. I heard Yourdon speak at a Y2K conference in Phoenix in 1998, at which I was also a presenter. He was by far the most negative, and alarming, of all the people I heard discussing the potential consequences. In informal remarks after one of his talks, he said he was moving his family to a small town near Taos, New Mexico, away from the major cities, with their potential for major social unrest. He believed the town would have a reliable water and power supply, and he was planning to stock a couple of years’ supply of food. His remarks sent a chill through the room. I never kept close enough track of his movements to discover whether he actually followed through.
Steve Greif
Columbia, MD
Glass’s column contains its usual high measure of common sense. Thank you.
Having been responsible for the Y2K activities of a Fortune-500 company, I’d like to offer one small observation: A fraction of the money spent on Y2K was devoted to finding and eliminating problems concerning date calculations. I don’t know exactly what this fraction was but would estimate it at between 10% and 33%. The remainder was spent proving that systems (of all types) would operate predictably as the date changed.
This latter exercise was optional. Many enterprises believed it was justified and carried it out. Others chose, deliberately or otherwise, not to do this work. Whether some or all of this type of expenditure was wasted remains a matter of opinion, and I don’t really mean to express one. If forced to, I would say there was waste. Had there been less hysteria, we would have tested fewer systems and tested them under fewer scenarios; my company had a test suite that looked at more than 20 future dates.
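For readers who never saw one of these suites, here is a minimal sketch, in Python, of what a future-date scenario run of the kind the letter mentions might have looked like. The specific dates, names, and check are illustrative assumptions on my part, not the company’s actual suite:

```python
# Hypothetical Y2K rollover scenarios; a real suite, like the
# 20-plus-date suite mentioned above, covered these and more.
from datetime import date, timedelta

SCENARIO_DATES = [
    date(1999, 9, 9),    # 9/9/99, once a common end-of-data sentinel
    date(1999, 12, 31),  # last day before the century rollover
    date(2000, 1, 1),    # the rollover itself
    date(2000, 2, 29),   # 2000 is a leap year (divisible by 400)
    date(2000, 12, 31),  # day 366 of a leap year
    date(2001, 1, 1),    # first ordinary post-rollover year
]

def day_after(d: date) -> date:
    """Stand-in for the date arithmetic actually under test."""
    return d + timedelta(days=1)

def run_scenarios() -> None:
    # Prove the system behaves predictably as each boundary date changes.
    for d in SCENARIO_DATES:
        nxt = day_after(d)
        assert (nxt - d).days == 1, f"rollover failed at {d}"
        print(f"{d.isoformat()} -> {nxt.isoformat()} ok")

if __name__ == "__main__":
    run_scenarios()
```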
Bernard Abramson
West Point, NY
The environment Fayad, Laitinen, and Ward write about in “Software Engineering in the Small” (“Thinking Objectively,” Mar. 2000, p. 115) is exactly what I encountered from 1988 to 1997 while working for a software startup on programmer utilities (source compare, version tracking, requirements definition) for mid-size business IT departments. “Time-to-market” was critical, especially with two or three direct competitors. “Incremental development and release” was a must in order to satisfy customer demand for specific features. And, yes, beta software was shipped to customers—at their request. In my experience, customers are not willing to pay what it would cost to write bug-free software, and they don’t want to wait for the features they need. So they accept bugs (within reason) in beta versions so they can solve their business problems in a timely fashion.

However, we did some things right. From day one we used a development environment framework—developed in-house—that allowed us to quickly generate structured, consistent, and compatible programs. We almost always included our best customers in requirements definition and often had them review the results. We also conducted occasional peer reviews, owing to the critical nature of particular modules and to keep our developers honest. I left the company when I thought it had grown enough to begin to take advantage of more advanced methodologies (separate QA and component libraries), but it refused.
I look forward to the next installment of “Thinking Objectively.”
Robert LeMay
Downers Grove, IL
In regard to “Software Engineering in the Small,” I found it refreshing to see an expanded field of view beyond the usual megalithic super project (MSP).
But perhaps, if anything, the perspective the authors introduced needs to be taken further in several directions.
The column focuses primarily on the creation of “product” as the end goal of software engineering. I would argue that we have already entered an era in which product is giving way to service, that is, software product is giving way to software-enabled services as the dominant economic form of the day (and is “where the big money is”). The financial heroes of the Internet are mainly service oriented, such as Amazon.com and eBay. Before they fall behind, the large, established Internet product companies (Cisco, Intel, Microsoft, and Sun) are quickly trying to capture big pieces of the Internet services market.
This appears to be part of the shift from the product orientation of the industrial era toward a focus on services for specific communities of common interest in today’s network information era. For example, I believe Jack Welch of GE recently outlined a strategy based on services for tangible and cyberspace objects as the future for GE, in which the Internet plays the critical role.
Leveraging background software assets to deliver superior services has a very different dynamic from an environment that produces products for sale to outside users.
Equally significant, it also has a very different flavor from the custom software development model underlying the traditional militarily inspired software engineering practices, which the authors rightly call into question.
Service-enabling software must be robust and flexible, even though it may be used by only one internal client, because such software indirectly reaches millions of customers through the end services it provides. This is new and largely unexplored territory, and I have “felt the pain” of several clients struggling with this problem.
We now have three distinct software environments corresponding to three distinct economic (business) models: custom software development specifically for somebody other than the development organization (single customer); software product development for sale to many outsiders (multiple customers); and service-enabling software that can have single or multiple direct customers but transitively has millions of customers who use the end service.
Conventional software engineering practices scale up very poorly beyond one individual project at a time, especially if that project is an MSP. Typically, the MSP holds the world view that it is the world, or at least the part of the world that really matters.
This inadequacy can be seen from the perspective of two closely related but distinct cousins: enterprise architecture and product line architecture. Enterprise architecture must structure and control many diverse applications at once, while product line architecture applies to many related but distinct products.
Both fields involve large amounts of “software engineering in the large,” among other things. They differ largely in that enterprise architecture is mainly oriented toward enabling services, while product line architecture is mainly oriented toward products for outside delivery. Nevertheless, conventional software engineering is surprisingly silent on the needs of both these critical areas, which involve oversight of multiple projects, stewardship of vast investments in existing technical assets, and long-range planning on an enterprise and multi-enterprise level.
The authors have made a good start by asking important questions about root assumptions held by the software engineering community. Hopefully, this will lead toward much-needed improvements and will open the door to further inquiry.
Rob DuWors
Milford, OH