Computer and video games are engaging because they provide increasingly realistic and lifelike 3D visual environments, thus driving demand for home 3D entertainment. But non-entertainment applications are also adopting immersive 3D graphics, along with game engines, physics models, architectures, and development methods, while their developers ponder ways to satisfy the pumped-up expectations of users who grew up playing games.
William Swartout and Michael van Lent identify two general classes of non-entertainment applications that benefit most from a game-style design approach. The first is experience-based systems, used in, say, training simulations for U.S. Army officers (When would it be wise to talk your way out of trouble?) and in behavior modification (such as treating phobias and post-traumatic stress disorder). The second is testbeds that let emerging technologies be exercised in a relatively rich environment before they are ready for the real world. For applications intended to teach users through realistic experience, they write, game design techniques make the experience much more memorable.
Mary C. Whitton writes that all applications involving compelling virtual environments depend on high-quality sensory immersion as well as on users being eager for the experience. Some are so effective at creating the illusion of reality, she says, that they are gut-wrenchingly compelling. The more they immerse users in stimuli corresponding to their expectations of the virtual world, the more compelling is their experience.
Ramesh Jain argues that decision makers need insights that can come only from their own experience and experimentation with available data sources. In experiential environments, users apply their senses directly to explore information related to an event (such as a business transaction or even a college football game). These environments also promise to bring computing to billions of novice users worldwide, even the illiterate, by providing relatively language-free interfaces. Jain warns, however, that experiential environments built without new data assimilation, indexing, and management techniques will be neither scalable nor efficient.
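The event-centric data model at the heart of such environments can be suggested with a toy sketch. The Python below is illustrative only (the structure and names are hypothetical, not Jain's design); the article's warning is precisely that doing this at real-world scale requires new assimilation, indexing, and management techniques.

    # Toy event index: illustrative only, not Jain's design. Real experiential
    # environments would need far more capable assimilation and indexing.
    from dataclasses import dataclass, field

    @dataclass
    class Event:
        kind: str                                   # e.g., "transaction", "touchdown"
        time: float                                 # seconds since some epoch
        place: str
        media: list = field(default_factory=list)   # pointers to video, audio, text

    class EventIndex:
        def __init__(self):
            self.events = []

        def add(self, event):
            self.events.append(event)

        def query(self, kind=None, start=0.0, end=float("inf")):
            return [e for e in self.events
                    if (kind is None or e.kind == kind) and start <= e.time <= end]

    index = EventIndex()
    index.add(Event("touchdown", 1805.0, "stadium", ["cam3.mp4", "radio.wav"]))
    index.add(Event("transaction", 1900.0, "web", ["receipt.txt"]))
    print(index.query(kind="touchdown"))            # browse by event, not by file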
Michael Tsang et al. illustrate the notion that real-time response and interactive storylines can be turned into practical functions in non-entertainment systems. They describe two such systems: the Boom Chameleon for evaluating virtual models and StyleCam for developing online product marketing and advertising. Both involve inspection of virtual 3D objects but take different approaches to giving users an engaging experience. The Boom Chameleon allows users to view virtual objects (such as prototype automobile designs) while keeping the interaction rooted in the physical world. Interaction is thus similar to games and simulations that use easy-to-understand metaphors for navigation (such as “walking”). StyleCam allows authors to create a game-like experience using interactive narrative (such as for creating virtual 3D automobile sales brochures). It can be configured so that when users experience a particular set of animations, another set automatically becomes accessible. As in many games, once users achieve one goal, they gain access to an entirely new, perhaps more challenging, level of play.
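StyleCam’s unlocking behavior can be pictured as a simple state machine. The following minimal Python sketch uses hypothetical names and is not the authors’ implementation; it merely shows how finishing one set of animations might make another accessible, much like clearing a game level.

    # Minimal sketch of StyleCam-style progressive disclosure (hypothetical
    # names; illustrates the idea, not the authors' implementation).
    class AnimationSet:
        def __init__(self, name, unlocks=None):
            self.name = name
            self.unlocks = unlocks or []    # sets that become accessible afterward

    class Experience:
        def __init__(self, initial_sets):
            self.accessible = {s.name: s for s in initial_sets}

        def play(self, name):
            animation = self.accessible.get(name)
            if animation is None:
                raise KeyError(f"'{name}' is not accessible yet")
            print(f"playing {animation.name}")
            for unlocked in animation.unlocks:      # goal reached: open a new "level"
                self.accessible[unlocked.name] = unlocked

    interior = AnimationSet("interior-tour")
    exterior = AnimationSet("exterior-walkaround", unlocks=[interior])

    brochure = Experience([exterior])
    brochure.play("exterior-walkaround")    # viewing this unlocks the interior
    brochure.play("interior-tour")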
Joseph A. Paradiso writes that large interactive surfaces built into store windows, museum exhibits, and other communal spaces allow casual information browsing through physically knocking on the glass. The system he describes locates knocks and taps across a large sheet of glass from the differences in the times at which bending waves arrive at four sensing locations, transforming single-pane windows (common urban features) into large tracking surfaces. Though the related applications all involve close-up interaction with large dynamic displays, the system is appropriate for other niches, he writes, including selecting objects placed behind glass partitions. This would enable, say, interactive museum cases, where knocking near an object would bring up related text, images, audio, or video.
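The underlying localization is a time-difference-of-arrival problem. The Python below is a rough sketch that assumes a single constant wave speed and illustrative dimensions; the actual system must also contend with the dispersive propagation of bending waves in glass. It grid-searches for the tap position whose predicted pairwise arrival-time differences best match the measured ones.

    # Rough time-difference-of-arrival (TDOA) sketch for locating a tap on a
    # pane from arrival times at four corner sensors. Constant wave speed and
    # dimensions are assumed for illustration; real bending waves are dispersive.
    import itertools, math

    SENSORS = [(0.0, 0.0), (1.2, 0.0), (0.0, 0.9), (1.2, 0.9)]  # corner positions, m
    SPEED = 1500.0        # assumed propagation speed, m/s (illustrative value)

    def locate(arrival_times, step=0.01):
        """Return the grid point whose predicted arrival-time differences
        best match the measured ones (least squares over sensor pairs)."""
        best, best_err = None, float("inf")
        for i in range(int(1.2 / step) + 1):
            for j in range(int(0.9 / step) + 1):
                x, y = i * step, j * step
                t = [math.hypot(x - sx, y - sy) / SPEED for sx, sy in SENSORS]
                err = sum(((t[a] - t[b]) - (arrival_times[a] - arrival_times[b])) ** 2
                          for a, b in itertools.combinations(range(4), 2))
                if err < best_err:
                    best, best_err = (x, y), err
        return best

    # Simulate a knock at (0.8, 0.3) and recover its position.
    knock = (0.8, 0.3)
    times = [math.hypot(knock[0] - sx, knock[1] - sy) / SPEED for sx, sy in SENSORS]
    print(locate(times))  # approximately (0.8, 0.3)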
Carlo Tomasi et al. describe a full-size projection keyboard they’re developing for cell phones, PDAs, and other portable devices. Made entirely of light, it projects onto practically any flat surface yet functions like its mechanical counterpart. Full-size, full-function projection overcomes the problem of miniature displays and keyboards too small to be comfortably keyed or viewed. It also eliminates the need to store or fold away the keyboard; users simply switch it off. The developers now face at least two fundamental physical-interaction design challenges: enabling users to type on any surface, including directly on their laps; and enabling them to type in midair, obviating the need for a typing surface altogether.
Whether they’re used as virtual health-care attendants or as animatronic characters in theme parks, sociable robots, write Cynthia Breazeal et al., need to perceive the behavior of humans through vision, sound, and touch. If robots could perceive us this way, faces, gestures, and speech (our natural social interfaces) could replace electronic interface devices, including joysticks and game pads. While machine perception remains a daunting problem, the interactive robot theatre described here engages its human audience in just this way while following a loosely constrained storyline. The Public Anemone and its fellow terrarium-bound autonomous characters are capable of physical interaction with their environment and social interaction with human passersby. Future robotic characters, the authors say, may find their way to Broadway, performing with human actors on an intelligent stage.
Finally, programmers, too, can benefit from a sensory-based, game-like experience. Focusing on the fact that auditory interrupts are difficult to ignore, Paul Vickers and James L. Alty describe CAITLIN, their musical program auralization system for rendering the runtime behavior of programs into structured musical frameworks. Programmers hear when their code deviates from the expected path, helping them identify and eradicate bugs. Though CAITLIN was designed for Pascal, its principles can be applied to other languages, including C and Java. Music seems particularly useful in two situations: when the program’s output contains no clues to a bug’s location and when the program contains complex Boolean expressions.
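To give a flavor of the idea, the toy Python below maps runtime control-flow events to notes collected as a MIDI-style pitch list. The mapping is hypothetical; CAITLIN itself targets Pascal and renders events into structured musical motifs rather than this simple scheme.

    # Toy program auralization: map control-flow events to MIDI note numbers.
    # Hypothetical mapping in the spirit of CAITLIN, not the authors' system.
    MOTIFS = {
        "loop_enter":   [60, 64, 67],   # rising arpeggio: entering a loop
        "loop_exit":    [67, 64, 60],   # falling arpeggio: leaving the loop
        "branch_true":  [72],           # high C: condition held
        "branch_false": [48],           # low C: condition failed
    }

    notes = []

    def auralize(event):
        notes.extend(MOTIFS[event])

    def search(items, target):
        auralize("loop_enter")
        for item in items:
            if item == target:
                auralize("branch_true")     # a hit sounds a high C
                return item
            auralize("branch_false")        # each miss sounds a low C
        auralize("loop_exit")
        return None

    search([3, 1, 4], 9)    # absent target: an audible run of low Cs, then the exit motif
    print(notes)            # feed to any MIDI player to "hear" the run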
Even game-playing children realize, as the authors here document, that the driving force in these immersive, realistic environments is the user’s experience, not merely a specification. They, and adults too, certainly like the idea of interactive everything. Despite the civilization-bashing content of many commercial games, that experience may help teach them how to choose the options most likely to ensure their (virtual) survival, as well as how to complete the tasks a system assigns them. We can also hope the experience imparts the lessons of software engineering, logic, and user perception, as well as insight into the nature of human society.
Figures
Figure. Emotive virtual actors. These screenshots are from an embodied virtual-actor software system in which actor-agents have the ability to interactively vary their body language to convey personality and changing mood. The agents were not so much animated as directed to play this scene. You can interact with them at mrl.nyu.edu/perlin/ (software and content by Ken Perlin, Media Research Laboratory, New York University, 2003).