
How the Virtual Inspires the Real

The physical world increasingly approximates the virtual world of computer graphics.

What is reality? What is fake?

Figure. The Green Goblin, Spider-Man’s arch rival (© 2002 Sony Pictures Entertainment, Columbia Pictures, Marvel Characters, Inc.)

More and more, physical reality looks like it jumped off a computer screen, imitating computer-generated models and simulations. Even our bodies are beginning to reflect the genetic engineering and virtual anatomy simulations first realized on a screen. The real and the virtual are even beginning to blend together through augmented reality interfaces. We may someday walk down a street where virtual road signs tell us about the structures we are passing. We may routinely sculpt virtual objects as if they had the physical characteristics of stone or steel or wood. The articles here reveal how the magic window of the computer screen allows us not only to imagine practically anything but to actually construct the fluid forms of our future reality.

W. Daniel Hillis explores how computer graphics help engineers and artists shape not only our artifacts but also our understanding of what is true about the physical world. In the same way artifacts reflect the hands of the people making them, for Hillis, product manufacturing today is often just another form of rendering. He adds that we have never had a way to see the vast, as well as the minute, dimensions of our universe with any instrument other than a computer. The graphics programmer, Hillis writes, is not just a lens maker but an interpreter of our shared reality.

The commercial viability and growing population of simulated actors in feature films may not exactly herald the end of their flesh-and-blood counterparts, as some actors fear, but those actors can certainly expect changes in their on-screen appearance. Alvy Ray Smith calculates perhaps another 20 years until the first completely digital “live-action” motion picture, including fully realized human beings replacing the appearance of a lead actor. In the meantime, count on increasingly prominent cameos. Smith adds, however, that such films are likely to need 10,000 times more computing power than a typical movie budget buys today.
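Smith’s two figures are at least consistent with Moore’s-law growth; this is a back-of-the-envelope reading on my part, not his published derivation. At one doubling of computing power every 18 months, a factor of 10,000 takes roughly 20 years:

$$10{,}000 \approx 2^{13.3}, \qquad 13.3 \ \text{doublings} \times 1.5 \ \text{years per doubling} \approx 20 \ \text{years}.$$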

When might scientists trust the scientific validity of visualizations derived from the technology behind computer games? Will it be possible for, say, a surgeon to perform some critical procedure, like remote liver-transplant surgery, on an interface that owes its navigational flexibility and sense of realism to a game console? Meanwhile, writes Theresa-Marie Rhyne, visualizations and scientifically reliable images will spread beyond the lab and into our pockets in the form of games, cell phones, and PDAs as part of the visualization community finding its place in a new computer graphics universe. That universe is increasingly charted by developers of games and wireless, handheld mobile computing devices.


We may routinely sculpt virtual objects as if they had the physical characteristics of stone or steel or wood.


Surgery simulations and video games, write Clemens Wagner et al., share a quest for realistic object behavior and high-quality images. What they do not share is complexity of object behavior; biological tissue behaves in far more complex ways than any car on any simulated game race track. That’s also why the virtual tissue simulation in their EyeSi surgical simulation system uses descriptive models; the criterion is not physical correctness but a convincingly plausible behavior of the emerging deformation. As a training tool, EyeSi immerses a surgeon in an environment of real surgical instruments, a mechanical eye with virtual tissue deformations, interactive stereo computer graphics, and interactions performed with imperceptible delay.
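To make “descriptive model” concrete, here is a minimal sketch of one common descriptive approach, a damped mass-spring strand whose stiffness and damping are tuned until the deformation looks right rather than measured from tissue. It is illustrative only, not necessarily the model EyeSi uses; every parameter here is hypothetical.

```python
import numpy as np

# Illustrative damped mass-spring chain: a descriptive deformation model
# tuned for plausible-looking behavior, not physical correctness.
# All parameters are hypothetical, chosen by eye rather than measurement.

N = 10                           # mass points along a strand of "tissue"
pos = np.zeros((N, 2))
pos[:, 0] = np.arange(N) * 0.1   # rest spacing 0.1 along x
vel = np.zeros((N, 2))

REST_LEN = 0.1
STIFFNESS = 50.0                 # tuned until deformation "looks right"
DAMPING = 2.0
DT = 0.005

def step(pos, vel, pulled_idx, pull_to):
    """Advance one explicit-Euler step; node 0 is fixed, one node is pulled."""
    forces = np.zeros_like(pos)
    for i in range(N - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        if length > 1e-9:
            f = STIFFNESS * (length - REST_LEN) * (d / length)
            forces[i] += f           # pull node i toward its neighbor
            forces[i + 1] -= f
    forces -= DAMPING * vel
    vel = vel + DT * forces          # unit masses
    new_pos = pos + DT * vel
    new_pos[0] = pos[0]              # anchored end stays put
    new_pos[pulled_idx] = pull_to    # instrument drags this node
    vel[0] = 0.0
    vel[pulled_idx] = 0.0
    return new_pos, vel

# Drag the free end upward, as a surgical instrument might.
for _ in range(200):
    pos, vel = step(pos, vel, N - 1, np.array([0.9, 0.3]))
print(np.round(pos, 3))   # intermediate nodes settle into a plausible curve
```

The point of the design is exactly the one the authors make: nothing here models real tissue mechanics, yet with well-chosen constants the strand stretches and settles convincingly, at a cost low enough for interaction with imperceptible delay.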

Generating 3D models from 2D images is becoming a routine form of model building. It can even be performed automatically, write Marc Pollefeys and Luc Van Gool, by a computer using a combination of algorithms developed in computer vision, photogrammetry, and computer graphics. They describe their technology for automatically generating realistic 3D models from a sequence of images acquired with a handheld camera. Applications already exist in such diverse disciplines as archaeology, architecture, forensics, geology, planetary exploration, and movie special effects.
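The skeleton of such a pipeline, reduced to just two views, can be written in a few OpenCV calls. The sketch below is a generic structure-from-motion illustration, not Pollefeys and Van Gool’s system; the image files and the calibration matrix K are placeholder assumptions, whereas their approach recovers such information automatically from an uncalibrated handheld sequence.

```python
import numpy as np
import cv2

# Minimal two-view structure-from-motion sketch (not the authors' system).
# 'left.jpg'/'right.jpg' and the intrinsics K are placeholder assumptions.
img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed calibration

# 1. Detect and match features across the two images.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Estimate relative camera motion (RANSAC rejects bad matches).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the inlier matches into 3D points.
inl = mask.ravel() > 0
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
X = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
X = (X[:3] / X[3]).T              # homogeneous -> Euclidean
print(f"Reconstructed {len(X)} 3D points")
```

A full system chains many views, refines everything with bundle adjustment, and drapes textured surfaces over the point cloud; the two-view core above is the step everything else builds on.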

Norman Badler, who has been creating virtual humans for years, has now put them to work maintaining complex physical systems, including military aircraft, by simulating assembly, repair, and maintenance functions in a 3D virtual environment. Jokes abound, Badler et al. write, concerning unintelligible instructions. The virtual humans substitute for people performing under difficult or dangerous conditions; if the instructions given to them, such as which tools to select and how to sequence subactions, are incorrect, or if the design itself is flawed, they report failures. A procedure is valid if no failures occur across a sufficiently large range of anthropometric body sizes.
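That validity criterion amounts to a universally quantified check over body sizes. A minimal sketch, in which `simulate`, `procedure`, and the `failed` flag are hypothetical stand-ins for the authors’ simulation rather than their actual API:

```python
def procedure_valid(procedure, body_sizes, simulate):
    """A hypothetical check mirroring the stated criterion: the procedure
    is valid iff the simulation reports no failure (wrong tool, bad
    subaction order, flawed design) for any anthropometric body size."""
    return all(not simulate(procedure, size).failed for size in body_sizes)
```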

Finally, Mark Billinghurst and Hirokazu Kato show how collaborative augmented reality interfaces are beginning to render the real and the virtual indistinguishable. These interfaces let users see each other, along with virtual objects, allowing communication behaviors much more like face-to-face collaboration than any screen-based collaboration, as when soldiers on a battlefield, each equipped with a personal head-up display, discuss the targeting information they see overlaid on the world around them.

How should we respond to these virtual aspects of our real reality, especially as it becomes difficult to distinguish between virtual and real? Will we reach a limit as to how much “unreality” we want? For games and entertainment, no. Can a simulation or model ever be too much like the real thing? Again, no, especially for engineers and scientists, for whom understanding real reality is critical. I’m reminded of the comment by mathematician Richard W. Hamming (1915–1998) that the purpose of computing is insight, not numbers.

How about our children loving their robot pets as substitutes for real pets or people? That’s certainly one to watch.

Figure. West Africa dust storm; data collected Feb. 2000, image completed May 2001 (NASA/Goddard Space Flight Center Scientific Visualization Studio, Greenbelt, MD, the Sea-Viewing Wide Field-of-View Sensor project, and Orbital Imaging Corp.; Stuart A. Snodgrass, animator; Gene Feldman, scientist).


