Consider the question of water needs associated with the vegetation appearing in the accompanying figure: what do you think is the correct answer? Most readers are likely to answer "C." However, a study conducted by Haney and Scott3 found that some people chose B. Why? When asked to justify their supposedly wrong answer, they explained that since the cabbage is not in the pot, it must have been picked and therefore no longer needs water. What an example of disarming reasoning! In this case, answering B was categorized as incorrect simply because the test writer, who assigned the correct answer to C, had not anticipated it.
In computer science there are many cases of systems being used in ways their designers never anticipated. One example is how some users build large databases in spreadsheet programs because they find them easier to use than dedicated database programs. The lesson from these examples is that people think and express themselves in ways designers cannot always predict, and those ways are not necessarily wrong. This fact is too rarely considered when designing interactive systems, and the history of human-computer interaction is full of examples of users adapting to designers' choices. The development of new gestural interfaces (for example, touch-based devices such as smartphones and tablets) seems to follow this pattern.
We are accustomed to interacting in an environment that inhibits our innate interactive capabilities. In Nicholas Negroponte's words, our connection to computers is "sensory deprived and physically limited." The mouse is the clearest example: a device that provides only two degrees of freedom (DOF), versus the 23 DOF of our fingers. The mouse rules the desktop computing world and remains popular in home and office settings. We should be grateful for Engelbart's invention, which was a breakthrough in computer history and enabled a new way for users to interact with computers through a graphical interface. The mouse is a great input device, but it is surely not the most natural one. A user must learn how to work with it and, although it seems simple to insiders, many people feel disoriented by their first encounter with a mouse.
Gestural Interfaces
With gestural interfaces, we strive for natural interaction. Today the word natural, as in "natural user interfaces" (NUI), is mainly used to highlight the contrast with classical computer interfaces that employ artificial control devices whose operation must be learned. Nevertheless, current gestural interfaces are themselves based on a set of command gestures that must be learned. We believe that, through a natural interface, people should be able to interact with technology using the same gestures they employ to interact with objects in everyday life, as evolution and education have taught us. Because we have all developed in different environments, individual gestures may vary.
The main question we must face when designing gestural interfaces is this: Are these interfaces natural only in the sense that they offer a higher degree of freedom and expressive power compared with a mouse-and-keyboard interface? Or are we really aiming to empower users with a means of communicating with computer systems that feels more familiar to them? Recently, Don Norman claimed that "natural user interfaces are not natural,"4 arguing they do not follow basic rules of interaction design. We also believe they cannot, at present, be called natural. Our criticism, however, rests on cultural considerations.
Users should not have to learn an artificial gestural language created by designers, one that depends on the device or even on the application. Yet each company has defined its own guidelines for gestural interfaces in an attempt to establish a de facto standard. Imposing a standard, especially on cultural matters such as gestures, can easily fail due to natural differences among human beings: think of the Esperanto language, which failed to be widely adopted because of its artificiality. Technology is an interesting factor to consider because, even if a gesture is not the most natural one, it can become natural through the widespread use of the technology that adopts it. But again, when it comes to cultural issues this can be quite difficult, as in the case of the English language, which has been imposed as a de facto standard; non-native speakers will nonetheless always encounter difficulties in becoming proficient.
The main aim of natural interfaces should be to break down the technology-driven approach to interaction.
Consider the tabletop multitouch environment in which moving the index finger of each hand away from the other is mapped onto the "zoom" action. Is this natural, or just an arbitrary gesture that is easy to recognize and learn? Have you ever seen a person make that gesture while speaking with another person? Although we should be able to interact with new technologies much as we interact with the real environment, today's natural interaction systems do not take into account the spontaneity of users; in fact, they inhibit the way users naturally interact, because they force them to adopt a static, predefined set of command gestures. In their pioneering work on the Charade system, Baudel and Beaudouin-Lafon1 partially faced this quandary. They acknowledged that problems with gestural interfaces arise because users must know the set of gestures allowed by the system, and for this reason they recommended that "gestural commands should be simple, natural, and consistent." However, this does not represent a real solution to the problem: users were still not free to interact naturally but were once again forced to learn an artificial gesture vocabulary. Furthermore, in real scenarios the appropriate gesture also depends on the context, domain, cultural background, and even ethics and human values.2
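To make the arbitrariness of such a mapping concrete, here is a minimal sketch (in Python, with hypothetical touch-point data) of the conventional pinch rule: the zoom factor is simply the ratio between the current and the initial distance of two tracked touch points. Nothing in the gesture itself "means" zoom; the association is a design decision.

```python
import math

def distance(p1, p2):
    """Euclidean distance between two touch points given as (x, y) tuples."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pinch_zoom_factor(start_points, current_points):
    """Map a two-finger pinch/spread gesture onto a zoom factor.

    start_points / current_points: pairs of (x, y) touch coordinates,
    for example taken from successive multitouch frames. A factor > 1
    means "zoom in", < 1 means "zoom out" -- a convention chosen by
    the designer, not something inherent in the movement.
    """
    d0 = distance(*start_points)
    d1 = distance(*current_points)
    if d0 == 0:
        return 1.0
    return d1 / d0

# Example: the fingers move apart, so the content is scaled up by 1.8x.
print(pinch_zoom_factor([(100, 200), (200, 200)], [(60, 200), (240, 200)]))
```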
Interpreting Gestures
From our everyday life we know the same meaning can be expressed by different gestures: for example, a handshake is a universal sign of friendliness, but only if our universe is limited to the Western world. In India, the common way to greet someone is by pressing the hands together, palms touching and fingers pointed upward, in front of the chest (the Añjali mudra). Conversely, the same gesture can have different meanings depending on cultural context: the thumbs-up sign in America and most Western countries means something is OK, or that you approve, yet it is interpreted as rude in many Asian and Islamic countries.
The intended goal of gestural interfaces is to provide users with an intuitive way to interact so that, ideally, no learning or training of specific gesture/action mappings is required. Nevertheless, current interactive gestural languages are defined in laboratory settings and so, even if they can be adopted in preliminary investigations, they do not accurately reflect users' behavior. This situation is similar to the one that emerged when we shifted from command lines to graphical user interfaces. The main factor driving that paradigm shift was, in fact, a human one: fostering recognition rather than recall. For most people, it is easier to recognize an icon on an interface and associate it with an action than to recall a command belonging to a specific language that has to be learned. Of course, this is made possible by the display interface, just as touchscreens open new possibilities for human-computer interaction.
The definition of an affordance language (in terms of visible hints) represents one possible solution to this problem, especially for interactive tables used in public spaces, where users must quickly learn how to interact with the system. Another possibility is to investigate the participation of nontechnical users in the design of gestural languages.5 By actively involving end users in the definition of gesture sets (participatory design), we can aim to select the gestures with the highest consensus. Nevertheless, we should keep in mind that gestures, like signs, are living things that change with culture, time, and context. We therefore envision a further step toward natural interaction: user personalization. New devices able to recognize the user can provide gesture personalization mechanisms based on the user's preferences. A broader level of personalization can take into account communities rather than single users. For example, a gestural interface designed in one cultural context could evolve to address the diversity and cultural backgrounds of different communities of users.
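As a rough illustration of the per-user personalization mechanism suggested above, the following Python sketch (all gesture and function names are hypothetical) lets a recognized user's personal gesture-to-action mapping override a designer-defined default vocabulary.

```python
from typing import Optional

# Designer-defined default vocabulary: gesture name -> system action.
DEFAULT_GESTURE_MAP = {
    "pinch_out": "zoom_in",
    "pinch_in": "zoom_out",
    "swipe_left": "next_page",
}

class UserProfile:
    """Per-user overrides of the default gesture vocabulary."""
    def __init__(self, user_id: str, overrides: Optional[dict] = None):
        self.user_id = user_id
        self.overrides = overrides or {}

def resolve_action(gesture: str, profile: UserProfile) -> Optional[str]:
    """Return the action for a recognized gesture, preferring the user's
    personal mapping over the designer-defined default."""
    return profile.overrides.get(gesture, DEFAULT_GESTURE_MAP.get(gesture))

# Example: a user who prefers a "push away" gesture for zooming out.
alice = UserProfile("alice", overrides={"push_away": "zoom_out"})
print(resolve_action("push_away", alice))   # -> zoom_out (personal choice)
print(resolve_action("pinch_out", alice))   # -> zoom_in  (shared default)
```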
New gestural interfaces could also analyze and automatically exploit users' unconscious movements during interaction. Unconscious movements can be considered the most natural ones, as they represent the honest signaling that happens without our thinking about it. For example, moving our face closer to a book to magnify the text can be considered an unconscious action for zooming, one that is more natural than any hand gesture. Clearly, considering diversity at the level of both cultural and unconscious gestures raises many new challenges. For instance, gestures would need to be validated through experimental evaluations in multicultural and multidisciplinary environments rather than through classic controlled experiments in laboratory settings. Another challenge is augmenting the surrounding environment to recognize not only gestures but also facial expressions and body movements (for example, by utilizing Microsoft Kinect).
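As a purely illustrative sketch of such an unconscious mapping, the following Python function (the sensor reading and parameter values are assumptions, for example a face-to-screen distance estimated by a Kinect-like depth camera) turns how far the user's face is from the screen into a magnification level, so that leaning in, as one does with a book, zooms the content.

```python
def zoom_from_face_distance(face_distance_m,
                            reference_distance_m=0.6,
                            min_zoom=1.0, max_zoom=3.0):
    """Map the user's face-to-screen distance onto a zoom level.

    face_distance_m: current distance in meters, as a depth sensor
    might estimate it (assumed input).
    reference_distance_m: a "neutral" reading distance (assumed value).
    Leaning in (smaller distance) magnifies the content, mimicking the
    unconscious act of moving closer to a book.
    """
    if face_distance_m <= 0:
        return max_zoom
    zoom = reference_distance_m / face_distance_m
    return max(min_zoom, min(max_zoom, zoom))

# Example: leaning in from 0.6 m to 0.3 m roughly doubles the magnification.
print(zoom_from_face_distance(0.3))  # -> 2.0
print(zoom_from_face_distance(0.8))  # -> 1.0 (never shrinks below normal size)
```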
Conclusion
In our opinion, the road to natural interfaces is still long, and what we are witnessing now is an artificial naturality. These interfaces are natural in the sense that they employ hand gestures, but they are also artificial, because the system designer imposes the set of gestures. The main aim of natural interfaces should be to break down the technology-driven approach to interaction and provide users with gestures they are more used to, taking into account their habits, backgrounds, and cultural aspects. Perhaps the goal is unattainable: as we use technology in our everyday lives, doing so becomes as "natural" as using a hand to scratch an itch.