
Communications of the ACM

From the president

Virtual Reality Redux

Vinton G. Cerf

ACM Past President and Google Inc. Vice President and Chief Internet Evangelist Vinton G. Cerf

I have just returned from a trip to Warsaw where I had the opportunity to visit the Copernicus Science Centre. It was filled with young people racing from one interactive exhibit to another. For those who may be familiar with the Exploratorium in San Francisco (recently relocated to the Embarcadero), think of the Copernicus Centre as the Exploratorium on steroids. The facility houses an amazing array of interactive exhibits, ranging from humaniform robots to an olfactory laboratory, that offers hours if not days of opportunity for visitors to explore the real world through carefully crafted displays. Among the most unusual was a laboratory that provided a collection of distilled essences that, when combined, produced effects ranging from pleasant to pungent to downright smelly. It included essences drawn from plants and flowers as well as from the anal glands of animals such as beavers and civets. (I wondered how these particular essences might have been discovered...).

What does all this have to do with virtual reality?

As I encountered these fascinating experiences drawn from the physical world, I thought about what we have been able to achieve in the virtual world of computing. We create our own realities in this space. We can explore, in a simulated way, universes that bear little relation to our real world. We can change fundamental physical constants to observe the effects. We can design systems that could work in environments that might exist only in our imagination or only in the hearts of stars. The computable universe is in some sense even larger and more diverse than the real one, unless, perhaps, you subscribe to the infinite universe theory.

Just as neural structures in the brain deal only with electro-chemical actions, computers deal with binary bits. The neurons of the brain do not distinguish between input signals coming from ears, nose, eyes, tongue, or fingers. All the senses end up being represented with the same kinds of electro-chemical signals. Unsurprisingly, it is now thought that the same neural structures are used to detect and analyze sensory patterns, regardless of their origin. Computers likewise process binary-encoded signals (possibly passing through analog/digital converters), and they do so in both directions. That is, computers receive and interpret incoming digital signals and generate outgoing digital signals, regardless of the ultimate way in which these signals are rendered.
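The analog-to-digital conversion mentioned above can be illustrated with a minimal sketch (not from the column; the function names and parameters here are invented for illustration): a continuous signal is sampled at regular intervals and each sample is quantized to a fixed number of binary bits, after which the computer sees only the bit patterns, not the original medium.

```python
# A minimal, hypothetical sketch of an analog-to-digital converter (ADC):
# sample a continuous signal and quantize each sample to `bits` binary bits.
import math

def adc(signal, sample_rate=8, bits=4, duration=1.0):
    """Sample `signal` (a function of time, returning values in [-1, 1])
    and return each sample as a fixed-width binary code string."""
    levels = 2 ** bits
    samples = []
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate
        x = signal(t)                                    # analog value
        q = min(levels - 1, int((x + 1) / 2 * levels))   # quantize to a level
        samples.append(format(q, f"0{bits}b"))           # encode as binary
    return samples

# A 1 Hz sine wave becomes a stream of 4-bit binary codes:
codes = adc(lambda t: math.sin(2 * math.pi * t))
print(codes)
```

Whether the signal originates from a microphone, a touch pad, or a camera, the result is the same kind of bit stream; a digital-to-analog converter simply runs the mapping in reverse.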

Whether we type on a keyboard, finger a touch pad, or speak, each of these inputs becomes a digital signal suitable for processing. In the other direction, digital signals may be transduced to drive a variety of output media. The modes through which we interact with computers have been evolving toward ever-richer alternatives. The remarkable Microsoft Kinect device is an example in which gestures of all kinds become a new vocabulary through which to communicate with computers. By the same token, output media are growing richer. The worlds of imagination will be rendered by increasingly diverse means, including, one supposes, three-dimensional display technology and so-called "3-D printers." In fact, there is no limit to the potential variety of output media one could imagine. Bone conduction devices cause sound to "materialize" inside the cochlea—as does the speech processor used with cochlear implants. One can begin to imagine other mechanisms that go well beyond today's Google Glass toward direct neuro-electric stimulation of the retina or the optic nerve.

Speculations like this lead one to imagine that Asimov's "visi-sonor" may not be as far-fetched as it seemed when he wrote of the Mule in his famous Foundation Trilogy. It also makes one wonder about the potential inherent in today's CAVE display rooms. Perhaps the Star Trek holodeck is not as far in the future as it might seem at first glance. There are already many applications that can process the massive amounts of data produced by magnetic resonance imaging and present the results in three-dimensional format. At the same time, one can readily imagine registering and calibrating analyzed and simulated results overlaid on real images—the classic definition of augmented reality, which is already demonstrable. Applications that overlay language and currency translations on images of menus in a restaurant already exist. Barcode scanners overlay database information on images of jars of food.

It seems irresistible to predict and inescapable to imagine that in the future we will see broader and more diverse ways in which to render computer output or to capture computer input. Perhaps a 21st-century Descartes will be heard to say, "I think, therefore it is!"


Copyright held by Author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.

