Imagine running, flying, or slithering through Quake’s dank corridors, threatening skies, and twilight courtyards. Or striking off in new directions in whatever scientific visualization you were exploring. Who hasn’t wanted to peel back the walls or lift the roof off virtual structures to reveal hidden passageways, mechanisms, and secrets? Once inside you might also hope to gain a deeper understanding of physical, biological, or cosmological reality. You might even learn to control the processes that brought that reality into existence.
Even those of us outside the scientific community are about to have the opportunity not only to experience game worlds in this way but also to virtually explore, say, strands of human DNA, fluid flow in a human heart, metastasizing cancer cells, the structure of a mouse cerebellum, or a computer-aided design model of an airplane or office building.
3D virtual imagery is increasingly being adapted into all-encompassing visual and haptic (force feedback) interfaces, as the articles in this special section attest. Hybrid environments, where real and virtual objects and rich user sensation coexist in the same overlapping space, represent a big step beyond both purely virtual visualizations and augmented reality environments where virtual images are superimposed on real objects. Flight simulators are a perfect hybrid example, putting real cockpit and flight controls in the user’s hands and computer-generated sky, weather, and runways in the background.
Users, also described here as participants, get to look around, over, and under the objects being displayed. These objects, including tools and parts, feel real, have mass, and handle in the usual way, with appropriate visual and haptic feedback. Experiencing the sensations of being in a familiar physical environment, some participants are likely to wonder where the virtual ends and the real begins, especially when they are simultaneously working with real and virtual things.
Notable emerging applications include designing medical implants, painting virtual images, training for construction procedures, treating phobias, exploring scientific models, viewing the activity within multiplayer games, and delivering telepresence and teleimmersion (as in, say, a trip to the wreck of the Titanic or a proposed robotic repair mission to the Hubble Space Telescope). The physical scale of such projects ranges from the infinitesimal to the huge: from microscopic nanomachines, to molecular structures, to biological functions, to a virtual canvas, to space shuttle payloads.
Scharver et al. explain how medical sculptors and neurosurgeons, using a hybrid display called the Personal Augmented Reality Immersive System, are beginning to create custom-fit cranial implant models for patients with severe head injuries. These 3D models are based on the patients’ computed tomography data, which the system superimposes over the sculptors’ hands. The sculptors feel as if they’re handling physical models and real tools. Building virtual material into the defect model, they gradually shape the implant’s virtual geometry to fit a patient’s critical physical needs.
The physically based virtual painting system and interactive paint model created and presented here by Lin et al. harness the illusion of physical interaction with paints, brushes, surfaces, color, texture, and light, enabling users to express their visual and emotional imaginations within the digital equivalent of a traditional painter’s studio. Future research will look to capture the paint strokes of master artists via the haptic interface, then use them to train novices.
Benjamin Lok wants to inspire interest in hybrid environments as an alternative to their purely virtual counterparts for tasks requiring high-fidelity manual interaction (such as designing space shuttle payload components). Working with engineers at NASA’s Langley Research Center, Lok built a system in which participants see tools and parts, as well as themselves, while using their hands and bodies to simultaneously manipulate both real and virtual objects. The heart of the system is a way to detect collisions between the objects, then provide physically plausible responses to the collisions.
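To make the detect-then-respond idea concrete, here is a minimal sketch in C++, assuming (purely for illustration) that the tracked real object and the virtual object are each approximated by a bounding sphere and that the response is a simple penalty force proportional to penetration depth. The names and the sphere-proxy simplification are mine, not Lok's; his system models the user's body and the virtual parts far more richly.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical sketch: a tracked real object (say, the user's hand) and a
// virtual part are both approximated by bounding spheres. On overlap, a
// penalty force proportional to penetration depth pushes the virtual object
// out, giving a physically plausible response.
struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double len(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

struct Sphere { Vec3 center; double radius; };

// Returns true on collision and writes a penalty force on the virtual object.
bool respondToCollision(const Sphere& real, const Sphere& virt,
                        double stiffness, Vec3* force) {
    Vec3 d = sub(virt.center, real.center);
    double dist = len(d);
    double penetration = (real.radius + virt.radius) - dist;
    if (penetration <= 0.0 || dist == 0.0) return false;  // no contact
    double scale = stiffness * penetration / dist;        // Hooke-like penalty
    *force = {d.x * scale, d.y * scale, d.z * scale};
    return true;
}

int main() {
    Sphere hand = {{0, 0, 0}, 0.05};     // tracked real hand, 5 cm radius
    Sphere part = {{0.07, 0, 0}, 0.04};  // virtual payload part, 4 cm radius
    Vec3 f;
    if (respondToCollision(hand, part, 500.0, &f))
        std::printf("penalty force: (%g, %g, %g) N\n", f.x, f.y, f.z);
}
```

A penalty response like this is a common starting point because the force falls out directly from the overlap test; more sophisticated systems substitute impulse- or constraint-based dynamics.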
Houston et al. propose a way to separate the roof, floors, and other surface layers of architectural models, including those in multiplayer games, notably Quake III: Arena, to reveal their internal 3D structures, along with their inhabitants. Though it appears to function like the Marauder’s Map in J.K. Rowling’s Harry Potter and the Prisoner of Azkaban, their interactive system—called ArchSplit—provides automated support for generating exploded views of architectural environments, including many developed in OpenGL, without modification or recompilation.
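As a rough illustration of the exploded-view idea (not ArchSplit's actual mechanism, which operates on unmodified OpenGL applications), imagine geometry tagged by surface layer, with each layer lifted vertically in proportion to its level; the structs and layer tags below are hypothetical.

```cpp
#include <vector>

// Illustrative only: present an exploded architectural view by assigning each
// surface layer (ground floor, upper floors, roof, ...) a vertical offset and
// translating its vertices before rendering.
struct Vertex { float x, y, z; };

struct Layer {
    int level;                   // 0 = ground floor, higher = nearer the roof
    std::vector<Vertex> verts;   // geometry belonging to this layer
};

// Lift each layer by (level * spacing) so interior structure becomes visible.
void explodeView(std::vector<Layer>& layers, float spacing) {
    for (Layer& layer : layers)
        for (Vertex& v : layer.verts)
            v.z += layer.level * spacing;   // z-up convention assumed
}
```

Animating the spacing from zero to its full value would let a viewer watch the building separate into layers and reassemble.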
For online publication design to be as effective as its paper counterpart, Jacobs et al. propose using grid-based design principles to automatically adapt content to appealing page layouts, no matter what size display the reader is using. One of the greatest impediments to achieving the decades-old vision of a paperless world, they argue, may well be a “deceptively simple 2D computer graphics and user interface problem.” Their adaptive document layout system aims to automatically reformat, resize, and paginate electronic text and graphics so documents look as good on screen as they do in their original paper form.
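A toy sketch of grid-based adaptation follows, under assumptions of my own: the column thresholds, names, and greedy pagination are illustrative, not the authors' algorithm. The idea is to choose a grid (here, just a column count) from the display width, then flow measured content blocks into pages.

```cpp
#include <vector>

// Hypothetical sketch of display-size-driven layout: pick a column count from
// the display width, then flow content blocks into pages greedily.
struct Block { float height; };   // a paragraph or figure, already measured

int chooseColumns(float displayWidth) {
    if (displayWidth < 500.0f) return 1;    // handheld-sized screen
    if (displayWidth < 1000.0f) return 2;   // typical monitor
    return 3;                               // wide display
}

// Returns page breaks as indices into `blocks` for one column of content.
std::vector<int> paginate(const std::vector<Block>& blocks, float pageHeight) {
    std::vector<int> breaks;
    float used = 0.0f;
    for (int i = 0; i < (int)blocks.size(); ++i) {
        if (used + blocks[i].height > pageHeight) {  // block overflows: break
            breaks.push_back(i);
            used = 0.0f;
        }
        used += blocks[i].height;
    }
    return breaks;
}
```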
Worried that people might soon tire of video and computer games, largely because of their interfaces, Satoru Iwata, president of game giant Nintendo, recently told the BBC that current games, with their emphasis on complexity, realism, and sophisticated control systems, were already alienating many potential gamers. He may yet find that immersion and rich sensory interaction in 3D graphical environments represent a more promising direction for all kinds of human-computer interaction, including games and scientific visualizations, where even casual users get to explore and manipulate the world around them or just start over on a blank virtual canvas.
Figure. Tracking an enemy submarine in a four-wall immersive environment. The primary user (shown) uses a modified joystick (a flightstick attached to a six-degrees-of-freedom tracker). Movement involves four buttons mapped to a serial port using an Immersion IBox. Controls include large and fine translations, rotation of the space, and mode selection. The secondary user (not shown) controls the view through a wireless PC tablet running a Java Swing GUI with sliders, radio buttons, and other options for controlling various aspects of the data set, as well as lighting, shading, animation, simulation, and placement of billboards. The tablet communicates via the CORBA network protocol. (Greg S. Schmidt, Aaron Bryden, Sue-Ling Chen, Erik Tomlin, Lawrence Rosenblum, Virtual Reality Laboratory, Naval Research Laboratory, Washington, D.C.)
Figure. Created for the Musée du Quai Branly in Paris (scheduled to open in 2006), these virtual architectural environments visualize various design scenarios (such as accessibility for the handicapped, smoke detection, and alarm systems) and function as tools for curators planning future exhibitions. Decisions concerning placement of artifacts, lighting, and sound can be experienced on an immersive Barco screen. Visitors can explore the museum in real time at www.readymade.fr. (Nadir Tazdait, Readymade, Paris, France; Jean Nouvel, architect.)