A few years ago, a group of heavyweight thinkers attempted to invent interactive TV. There were product managers, human factors engineers, usability specialists, and designers of every conceivable variety—graphic designers, broadcast designers, interaction designers, industrial designers, and sound designers. There were also creative directors, producers, film directors, creative agents, software engineers and computer scientists. Expert after expert took the floor to tell the others his or her view, theory, model, statistics, focus group results, and so on.
On the second day, one participant said something that really upset those of us in the human-computer interaction contingent. His basic contention was this—we could talk about focus group findings, participatory design sessions, theories, methods, statistics, designs (user-centered or otherwise) until either (a) the end of time or (b) the money ran out. In the meantime, some kid in a garage would invent the next big thing right under our noses. It was not long afterward that a little start-up called Netscape took the world by storm.
We contend that the skeptic was at least partially right, at least in a figurative sense. Of course, many great things are actually invented by kids in garages, the Jobses and Wozniaks of the world. But inventions also come from old people in their basements, which are often warmer than garages, and middle-aged scientists working for big companies like 3M, undoubtedly working on the next Post-Its, and men and women in university research labs. They come from people who didn’t graduate from high school, like Edison, as well as people with Ph.D.’s like Douglas Engelbart (the mouse) and Erno Rubik (Rubik’s Cube). The fact that many successful inventors have Ph.D.’s demonstrates that it is possible to do something practical with the degree. College dropouts are also well represented in the inventors’ ranks—Bill Gates, Edwin Land (the Polaroid camera) and Buckminster Fuller (the geodesic dome) are prominent members of this group. These three all dropped out of Harvard, leading to a plausible hypothesis that Harvard is a very good place from which to drop out.
What Gives Inventors the Ability to Invent?
It doesn’t seem to be extraordinary intelligence. Some have reported that inventiveness is not related to high IQ [6]. For example, Nobel Prize winner Richard Feynman, one of the extraordinary geniuses of this century, had an IQ of 122 [8]. This is good news for all of us who do not belong to Mensa.
A theme recurrent in many writings on creativity and invention is that inventors are, in many ways, like children [5, 6, 9]. Writing on this topic, inventor George Margolin quotes Edison: "If you never grow up you’ll never grow old." Margolin goes on to state that the mind of the inventor is that of a curious child, not that of a smart adult. Seeing the world with new eyes is one of the ways that the inventor creates new things by improving upon old things [7]. In a similar vein, Richard Levy, another prolific contemporary inventor and author [6], states that children invent because they don’t realize that they can’t.
Inventive People, Not Methods
Inventions seem to come from inventive individuals, not from the use of any particular method, whether it is focus groups or marathon "ideation" sessions. We do not suggest that soliciting users’ views or collaboration with colleagues is unimportant; however, they may be overrated as contributors to invention. Nor are we suggesting that science and theory are unimportant, quite the contrary. In his seminal work, Creativity, Mihaly Csikszentmihalyi stresses that inventive individuals must first master the "symbolic systems" of their fields. The symbolic system may be grammar and style, mathematics, music theory, engineering fundamentals or theories of human-computer interaction. Picasso mastered conventional painting techniques before inventing his own revolutionary style. Likewise, mathematical breakthroughs generally require a very sophisticated understanding of fundamental principles. Inventions are new ways of doing things. If one is to create a new way, he or she has to be familiar with the old way, or the dominant paradigm.
Theory Does Not Cause Invention
Theory, important as it is, does not cause invention; rather, inventors cause invention. According to Margolin, what seems to be important for invention is being "less afraid of making mistakes than doing nothing important enough, difficult enough or worthwhile enough to cause mistakes." The inventor will somehow deal with 99 mistakes in order to have one success. When the inventor succeeds, theories must often change in order to explain the invention. So while it may be true that theories don’t cause inventions, inventions do inspire theories.
Theory Follows Invention?
This is an interesting notion, that invention precedes theory. Can it be true? The notion flies in the face of conventional wisdom, which holds that inventions are derived from methodical application of scientific theories. Many who have studied the history of invention and innovation have questioned the conventional wisdom. In his book, The Evolution of Technology, George Basalla [2] concludes that inventions evolve gradually with small improvements over previous inventions. Scientific theory, while often playing a significant role in the education of the inventor, plays only a minor role in the actual invention process. In their analysis of the history of technology, John Carroll, Wendy Kellogg, and Mary Beth Rosson conclude that invention and innovation proceed primarily by emulation of prior art. "Deduction from scientific principles has always played a minor role. Moreover, details of both the design and its context are critical determinants of viability. A ‘major’ innovation typically depends on a variety of more modest innovations, craft techniques, and even happenstance. Finally, design is an iterative process" [4].
Carroll, Kellogg, and Rosson cite the history and invention of the steam engine to illustrate their point. Contrary to folklore, the "invention" of the steam engine took place over more than 60 years. Basalla concludes that the only contribution of scientific theory to the evolution of the steam engine was the notion of vacuum ([2], cited in [4]). The rest proceeded through many inspired iterations of trial and error. Working steam engines, however, provided much inspiration for theory.
What holds true in the world of physical artifacts like steam engines seems to be particularly true in the field of HCI. Carroll, Kellogg, and Rosson, as well as Barnard, observed that innovations in the design of user interface artifacts have almost always preceded theory, rather than the other way around (for instance, the case of direct manipulation) [1, 4]. In other words, designers design solutions to things they perceive to be problems. Some of these attempted solutions (read: inventions) eventually become recognized as useful and/or usable, then psychology steps in to explain them. The explanations evolve into theory, which becomes part of the dominant paradigm from which future generations of HCI professionals learn their art (or science, if you prefer). This process can be viewed as an evolutionary spiral, in which successive generations learn from the explanations of previous generations’ inventions.
Pay attention to the things people use that can be made better, listen to and observe regular people, know your field, but most of all, occasionally stop acting your age and start acting like a kid. Who knows? You may be the one who comes up with the next direct manipulation paradigm, hypertext, successor to the Web, Netscape, mouse, GUI, PC, PalmPilot, universal remote control, paper clip, Velcro, or Post-It Note.