In a dark room at the North American Design Center of General Motors (GM), a discussion is going on around a vehicle concept. Opposite the reviewers, a detailed, full-size car spins on a turntable on a sun-lit patio. At the press of a button, the patio changes to a winter scene with the bare trees reflected in the car’s surface. Next, the car stands alongside the current vehicle brand line, or semitransparently overlays its predecessor to show the design evolution.
Operations that were formerly impractical or expensive (such as reviewing vehicle mock-ups outdoors) are becoming commonplace in the automotive industry through the use of virtual environments. In production use at GM since 1995, these venues have become collaborative meeting rooms, and are now linking global working groups together electronically via a shared virtual model. Here, I describe the use of single and networked collaborative virtual environments for design review, including lessons learned in group communication, visual perception, and remote control of model and viewpoints.
Collaboration among artists, engineers, and the others on a cross-functional vehicle development team requires an unambiguous language; the current lingua franca is the physical model. For exterior aesthetics, designers rely on 2D sketches (electronic or otherwise) for their fluidity and expressiveness. Moving from design intent to product involves transforming 2D artwork into 3D shapes and negotiating the convergence of aesthetic concepts with engineering and manufacturing realities. Detailed physical models continue to play a central role in both stages because no other representation is good enough to replace them. Yet electronic alternatives are used as well, because physical models are slow and expensive to produce and cannot be shared across a global enterprise. Virtual reality (VR) helps make these electronic models appear more real. Our research goal is to create realistic virtual models and advance their use, smoothly paving the way to a future digital enterprise.
The original symbols of VR—the glove and head-mounted display (HMD)—are not usually seen in the automotive industry [1]. Instead, Immersive Projection Technology (IPT) display systems have become popular. An IPT display is composed of a number of stereoscopic projection screens, often configured as a Wall or CAVE [2] as shown in Figure 1. The former is for “outside, looking in,” and the latter for “inside, looking out;” in this context, for viewing car exteriors and interiors respectively.
Stereo glasses, a 3D head-tracking device (to determine the correct perspective viewpoints), a 3D pointing device, and audio speakers complete the basic hardware interface. These, together with a computer and software to render a stored model, comprise the display system. For our application, a real bucket seat and steering wheel, coordinated spatially with the virtual vehicle interior in the CAVE, help drivers adjust their body positions and feel more like they are in a car rather than in a theater seat.
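Head tracking is what keeps the perspective geometrically correct: each frame, the rendering software recomputes an off-axis (asymmetric) viewing frustum from the tracked eye position relative to each fixed screen. The following sketch illustrates the geometry for a single Wall screen; it is a simplified illustration, not GM's production code, and all names and dimensions are ours.

```python
# Minimal sketch: off-axis frustum for one head-tracked Wall screen.
# Assumes the screen lies in the z = 0 plane, with the viewer at z > 0.
# All names and dimensions are illustrative, not from GM's system.

def off_axis_frustum(eye, screen_lo, screen_hi, near, far):
    """Return (left, right, bottom, top, near, far) for a glFrustum-style
    projection, given a tracked eye position and a screen rectangle.

    eye       -- (x, y, z) tracked eye position; z = distance from screen
    screen_lo -- (x, y) lower-left corner of the screen, same units
    screen_hi -- (x, y) upper-right corner of the screen
    """
    ex, ey, ez = eye
    # Scale the screen edges back to the near plane (similar triangles).
    s = near / ez
    left   = (screen_lo[0] - ex) * s
    right  = (screen_hi[0] - ex) * s
    bottom = (screen_lo[1] - ey) * s
    top    = (screen_hi[1] - ey) * s
    return (left, right, bottom, top, near, far)

# An 18-ft-wide Wall whose image reaches the physical floor (units: feet).
# As the viewer walks left, the frustum skews so the car stays fixed in space.
print(off_axis_frustum(eye=(2.0, 5.5, 8.0),
                       screen_lo=(-9.0, 0.0), screen_hi=(9.0, 9.0),
                       near=0.1, far=100.0))
```

For stereo, the same computation runs twice per frame, with the eye position offset by half the interocular distance to each side.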
Kicking the Tires
GM Research and Development has helped develop IPT systems since 1992 and has since disseminated them throughout the corporation [9]. While the examples discussed here are necessarily specific to our experience and application, we hope this distillation will be useful for other immersive applications.
Communicate visually. Surprisingly, many engineers consider visualization merely a final check. CAD data, which must exist prior to many kinds of visualization, can be used to calculate vehicle dimensions, packaging volumes, human reach curves, and sight lines. Why visualize, then? Visual exploration provides discovery of new problems as well as validation of expectations. Aspects of the design such as aesthetics, or even a sense of interior roominess, are subjective perceptions and are not directly calculable. Communication of the concept and experience is the key.
Minimize hardware intrusion. It is difficult to communicate with the person standing next to you if you are wearing an HMD. IPT displays are preferred, but even with them, the need to wear stereo glasses is annoying, enough so that people will dispense with stereo viewing altogether if it is of minimal value for a particular task.
Display the model full size. A beautifully drawn car shown on a stereoscopic, head-tracked 19-in. monitor is a novelty that generates little real enthusiasm. The difference between it and the same model shown full size on an 18-ft.-wide Wall was put succinctly by one manager: “That’s a model; this is a car!” The psychological impact is enormous. But there are also practical and perceptual reasons that motivate a 1:1 scale.
- It’s easier to have a group meeting in front of a big display than around a monitor.
- A full-size display promotes natural interaction with a human-scale, familiar object. You can walk up to a virtual car and peer inside, and simulate opening the door by reaching for it with your tracked hand. When sitting inside, reach and visibility issues can be experienced first-person.
- People estimate vertical model size correctly with respect to their eye height above the ground plane in real scenes and immersive displays, but not in small picture or monitor displays [5]. A Wall screen depicting a car needs to extend to the physical floor, so the vehicle and viewer ground planes flow together.
- Scale models, even physical ones, don’t generate correct shape expectations for the full-size model. Designers often remark that a full-size car does not look the same as the physical scale model they are accustomed to viewing daily. Full-size virtual models can help eliminate such false expectations.
Figure 3. A virtual meeting in a virtual car.
Branching Out
Networked virtual environments [6-8, 12] share data and control. GM has multiple sites connected by the company intranet. Four of these contain a CAVE and a Wall in the same room so that virtual vehicle interiors and exteriors can be viewed together if desired. The principal data structure in these display systems is the scene graph, which holds the parameters and relationships among the objects to be rendered graphically. With a programming interface allowing dynamic modification of the data over a network, the scene graph can be shared in several ways: among applications and the display system; among two or more display systems in the same room; and among geographically distributed display systems.
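To make the sharing concrete, the sketch below shows a scene-graph node whose parameter changes notify registered listeners; a local renderer and a network link are both just listeners. This is a hypothetical simplification, not GM's actual interface.

```python
# Minimal sketch of a shareable scene-graph node. The listener mechanism
# stands in for the networked programming interface described in the text;
# class and method names are hypothetical.

class SceneNode:
    def __init__(self, name, **params):
        self.name = name
        self.params = dict(params)   # e.g., transform, material, visibility
        self.children = []
        self.listeners = []          # local renderer, network link, ...

    def add_listener(self, callback):
        self.listeners.append(callback)

    def set_param(self, key, value):
        self.params[key] = value
        for notify in self.listeners:
            notify(self.name, key, value)

# A network link is just another listener: it serializes the change and
# sends it to peer display systems, which apply it to their own copy.
body = SceneNode("body_panels", material="red_metallic")
body.add_listener(lambda node, k, v: print(f"send to peers: {node}.{k} = {v}"))
body.set_param("material", "silver_metallic")
```

In this framing, sharing "among applications and the display system" and sharing "among display systems" differ only in which listeners are registered.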
Live editing, live links. A scene graph shared among applications (Figure 2a) can hold varied data: for instance, aesthetic surfaces and engineering simulation data (thermal flow, airbag deployment). A specific plug-in for each application can link data to corresponding entities in the scene graph over a network, so some parameters can be altered in real time [10] during a design meeting. Even though 3D interactive tools are available in the virtual environment, remote application control of the scene graph has some advantages (a sketch of such a plug-in follows this list):
- People are trained on the application user interface, making it easier to modify particular data.
- Data editing tools (for example, material editors) requiring fine control or numeric input are easier to use in a traditional 2D mouse/keyboard interface than in an immersive 3D display (in our experience).
- Changes stay in the native databases of the applications where they belong.
- Helpers can control or guide the action remotely. The reviewers don’t usually take direct control.
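As a concrete illustration of such a plug-in (a hypothetical sketch, not GM's actual protocol), each edit in the application can be serialized as a small message naming the scene-graph entity and parameter to change:

```python
# Minimal sketch of an application-side plug-in pushing live edits to a
# remote display system. The message format and endpoint are hypothetical;
# a real plug-in would map the application's native database to scene-graph
# entity names agreed on with the display.

import json
import socket

class DisplayLink:
    def __init__(self, host, port):
        self.sock = socket.create_connection((host, port))

    def update(self, entity, param, value):
        # One small message per edit; the change stays in the application's
        # native database, and the display applies it to its scene graph.
        msg = json.dumps({"entity": entity, "param": param, "value": value})
        self.sock.sendall(msg.encode() + b"\n")

# A material editor's "apply" button might call (hypothetical host/values):
# link = DisplayLink("cave-site-1.example.com", 7000)
# link.update("hood", "material", {"base": [0.7, 0.1, 0.1], "gloss": 0.9})
```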
See what I see? Besides being linked to application databases, scene graphs in multiple display systems can be linked to each other, item by item (Figure 2b). If an item is modified, all displays are updated. Each display system's viewpoint of a shared model can be independent or controlled. The view of a vehicle interior seen from inside a CAVE can control the view on a Wall of the same interior, making it simpler for many people in front of the Wall to experience exactly what the driver sees. The displays can be in the same room or globally distributed; in the latter case, verbal communication is implemented with speakerphones, although experiments with digital audio have been conducted [4].
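Viewpoint coupling is a small special case of this item-by-item linking: the driving display publishes its head-tracked camera pose, and a following display substitutes that pose for its own. A minimal sketch, with hypothetical names:

```python
# Minimal sketch of viewpoint coupling between two linked displays.
# The CAVE streams its head-tracked camera pose; the Wall can either
# follow that pose or keep an independent camera. Names are hypothetical.

class Viewpoint:
    def __init__(self):
        self.position = (0.0, 1.2, 0.0)      # driver eye point, meters
        self.orientation = (0.0, 0.0, 0.0)   # yaw, pitch, roll in degrees

class Display:
    def __init__(self, name):
        self.name = name
        self.local_view = Viewpoint()
        self.followed = None                 # another Display, or None

    def camera_for_frame(self):
        # Use the remote pose when coupled, the local one when independent.
        src = self.followed.local_view if self.followed else self.local_view
        return src.position, src.orientation

cave, wall = Display("CAVE"), Display("Wall")
wall.followed = cave                         # Wall now shows the driver's view
print(wall.camera_for_frame())
```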
See what I am doing? Full-body avatars are available to represent remote participants. However, GM's current design reviews are prestructured, involve at most two sites at a time, and focus on the shared model rather than the participants. In this constrained context, full-body avatars are unnecessary and go unused. Other than voice, the only representation of the remote participant that reviewers have requested is a virtual arrow pointer to indicate areas of interest in the model.
We have experimented with less constrained design review scenarios in which awareness [3] of the other participants is more important. In particular, avatars were used to illustrate various actions initiated by the remote participant [10, 11]. For example, reaching for the glove box can be illustrated to another viewer by an avatar with predefined animation—from motion capture, or an accurate simulation. The real participant triggers his remote avatar animation by simply reaching with a tracked hand for the glove box. Only the triggering command needs to be sent across the network, rather than real-time tracking information or changes in all of the avatar’s joints, since the animation is done locally.
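This trigger-based scheme trades some fidelity for a large bandwidth saving: one short message replaces a continuous stream of joint angles, and latency matters less because the animation plays out locally. A minimal sketch, with hypothetical clip names and a stubbed-out network send:

```python
# Minimal sketch of trigger-based avatar animation. Instead of streaming
# every joint each frame, the local site detects a gesture and sends one
# command; each remote site plays a locally stored animation clip.
# Clip names and the send() stub are hypothetical.

PREDEFINED_CLIPS = {"reach_glove_box", "point_at_model", "sit_down"}

def on_hand_near(target, send):
    """Called when the tracked hand approaches a known target."""
    clip = f"reach_{target}"
    if clip in PREDEFINED_CLIPS:
        send({"avatar": "participant_1", "play": clip})  # one small message

def on_remote_command(msg, avatar_player):
    """At the receiving site, start the locally stored clip."""
    avatar_player.play(msg["play"])  # motion-captured or simulated animation

# Reaching for the glove box triggers a single network message:
on_hand_near("glove_box", send=lambda m: print("send:", m))
```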
Down the Road
The future of collaborative design depends on more technological issues than the ones described here. For one, database management issues can be thorny in any enterprise, and are all the more challenging for globally distributed data, particularly when we envision pieces of the design coming from different worldwide studios, and even different vendors, just in time for a joint effort. But what will really determine success or failure is whether the technology can be made invisible—that is, not only suited to the task, but unobtrusive.
Today, latencies in communication networks make distributed face-to-virtual-face meetings feel unnatural and make fine-grained interaction difficult. There are additional problems within a single IPT site. Most IPT displays have only one correct viewpoint, so all but one viewer see the shared model distorted to some degree. Even with the correct viewpoint, individual perceptions can differ considerably, for reasons not yet understood. Replacing physical reality with an illusion will require significant additional research, but even limited successes can have a huge impact.