
Communications of the ACM

News

Looking Beyond Stereoscopic 3D's Revival



Custom-designed dynamic lenses constructed with birefringent material.

Stereoscopic 3D is experiencing a strong resurgence, with moviemakers no longer using the technique primarily as a gimmicky audience draw, sending objects poking out of the screen into the theater space. In today's cinema, stereoscopic 3D is used more subtly, as an aspect of storytelling that enhances immersion in environments that seem to invite the viewer inside. The film Avatar is a testament to this shift in how moviemakers use stereoscopic 3D, and the movie industry is not alone in embracing the technique.

Television manufacturers and broadcasters have fallen under the spell of the third dimension, with stereoscopic 3D TVs and Blu-ray players now widely available, and new 3D products expected this year from major manufacturers such as LG, Panasonic, and Sony. ESPN and other broadcasters are rolling out dedicated 3D cable channels. Also, the market for stereoscopic 3D computers is expected to grow rapidly, with one million units shipped this year and 75 million by 2014, according to Jon Peddie Research (although most of these computers will be stereoscopic 3D-capable due to their graphics processors, they'll still require a special monitor, glasses, and content). And mobile device-makers have begun to incorporate 3D technology into their handhelds, with the most recent example being the Samsung SCH-W960, a smartphone designed to convert 2D content automatically into stereoscopic 3D.

While the stereoscopic 3D resurgence continues to have a powerful impact on consumer culture, distinct challenges remain. Researchers working in this area—a field that draws on vision science, display technology, visualization, and cognitive science—are attempting to develop new techniques to overcome the limitations associated with traditional stereoscopic 3D strategies, many of which have remained unchanged since the 19th century. New research has found, for example, specific physiological reasons for the visual fatigue that viewing stereoscopic 3D media sometimes causes. And while the technology for creating such media has become more sophisticated, the content remains costly to produce and cumbersome to consume, requiring special cameras, projectors, and glasses.


Kurt Akeley is experimenting with an approach related to light-field theory in which the display is replaced with a volumetric light source so light comes directly from the simulated distance.


Kurt Akeley, a principal researcher at Microsoft Research Silicon Valley, says that while stereoscopic 3D techniques and technologies are growing more sophisticated, they remain far from mature. "I enjoyed viewing Avatar, and I experienced no discomfort during the three-hour showing, which is a big improvement over previous cinematic experiences," says Akeley, who cofounded Silicon Graphics and led the development of OpenGL. "But many people I've spoken with did experience discomfort, or were annoyed by certain cinematic techniques, such as the limited depth of field in many scenes."

There are several kinds of depth cues that researchers working in this area are actively studying to improve such stereoscopic 3D experiences. For example, one kind of cue is motion parallax, which conveys depth through apparent object movement. When looking out the side window of a moving vehicle, for instance, objects beside the road appear to move past the window more quickly than objects in the distance. Currently, while movies can render parallax for camera motion correctly, they cannot create parallax to account for a viewer's head movement. After all, everybody in an audience sees the same image on the screen, despite head movement and regardless of seat position in the theater.
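The strength of the motion-parallax cue described above can be illustrated with a small sketch (not from the article; the speed and distances are invented for the example). For an observer moving at speed v, a stationary object at lateral distance d sweeps past the line of sight at roughly v/d radians per second, which is why nearby fence posts race by while distant hills barely move:

```python
import math

def parallax_angular_speed(observer_speed_mps, object_distance_m):
    """Angular speed (radians/s) of a stationary object as it passes
    abeam of a moving observer. Small-angle approximation: the line of
    sight sweeps at roughly v / d, so nearer objects appear to move
    faster -- the motion-parallax depth cue."""
    return observer_speed_mps / object_distance_m

v = 25.0  # ~90 km/h, an assumed driving speed
near = parallax_angular_speed(v, 5.0)    # fence post beside the road
far = parallax_angular_speed(v, 500.0)   # hills in the distance
print(f"near object: {math.degrees(near):.1f} deg/s")  # 286.5 deg/s
print(f"far object:  {math.degrees(far):.2f} deg/s")   # 2.86 deg/s
```

The two-orders-of-magnitude difference in apparent speed is the signal a head-tracked display would have to reproduce per viewer.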

While it might not be difficult to imagine movie theaters one day tracking head movement to render viewer-based motion parallax correctly, several fundamental depth-cue issues have yet to be resolved. One of these issues is how the human brain perceives simulated 3D differently from how it perceives the natural world. In the physical world, the distance at which each eye's line of sight must converge, called the vergence distance, and the distance at which the eyes must focus, called the focus distance, are the same. Converging the eyes drives focus to a nearer distance, while focusing to a nearer distance drives the eyes to converge, which means that vergence distance and focus distance are coupled in the brain.

Stereoscopic 3D media requires viewers to converge their eyes at simulated distances while still focusing at the display's fixed distance. This mismatch causes a physiological disconnect that can lead to headaches and even nausea. To address the issue, Akeley has been experimenting with an approach related to light-field theory, which he says has the potential to lead to new strategies for dealing with the conflict. The idea is to replace the display with a volumetric light source so light comes directly from the simulated distance, essentially eliminating the gap between vergence distance and focus distance.
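The vergence-focus mismatch can be quantified with basic geometry (a sketch, not from the article; the interpupillary distance and viewing distances are assumed values). Vergence is the angle between the two lines of sight; focus demand is conventionally expressed in diopters (1/meters):

```python
import math

IPD = 0.063  # interpupillary distance in meters (typical adult, assumed)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight when converged on a
    point straight ahead at the given distance."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))

def focus_demand_diopters(distance_m):
    """Accommodative demand in diopters (1 / distance in meters)."""
    return 1.0 / distance_m

screen = 2.0     # actual display distance (m)
simulated = 0.5  # simulated distance of an object "popping out" (m)

# Natural viewing: vergence and focus agree at the object's distance.
# Stereoscopic viewing: the eyes converge at the simulated distance but
# must keep focusing on the screen -- the vergence-accommodation conflict.
conflict = focus_demand_diopters(simulated) - focus_demand_diopters(screen)
print(f"vergence set for {simulated} m, focus held at {screen} m")
print(f"focus/vergence mismatch: {conflict:.1f} diopters")  # 1.5 diopters
```

A volumetric display of the kind Akeley describes drives this mismatch toward zero by emitting light from (approximately) the simulated distance itself.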

Despite the promise of using light-field theory to make stereoscopic 3D more comfortable for viewers, the idea has proven to be difficult to implement in practical applications outside the lab. The prototype systems are mainly used to help understand human perception and the effects of forcing users to focus at one distance while looking at an object at a different, simulated distance. Still, Akeley remains optimistic about such research. "I'm hopeful that this virtuous cycle of researchers using industry-created equipment to probe human visual mechanisms and create useful feedback for industry will accelerate as stereoscopic viewing becomes the standard," he says.


Understanding Depth Cues

Another researcher focused on depth cues in stereoscopic 3D is Martin Banks, a professor of vision science at the University of California, Berkeley. Banks has conducted widely cited studies showing how this conflict between fixed display depth and vergence distance causes visual discomfort. "We think this is potentially a serious problem with the distribution of stereoscopic media, particularly when the viewer's distance is likely to be short, as with small TV screens viewed at a short distance," he says. "We still have lots to learn about how stereoscopic signals affect how people perceive things."

Banks is currently working on how the presentation of information over time affects the perception of motion and depth cues. In stereoscopic 3D cinema, images are presented to the left and right eye at 72 cycles per second. While the images are presented in counter-phase to the two eyes, each image is shown three times before it is updated. The update rate is only 24 cycles per second, a coarse approximation of what it would be in the natural world. Banks is studying how the visual system can tolerate such slow updates and how viewers perceive such signals to be smooth and convincing.
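The triple-flash arithmetic behind those figures is simple to lay out (a sketch of the protocol as the article describes it; the 144 Hz total shutter rate follows from alternating the two eyes):

```python
# Triple-flash presentation in stereoscopic 3D cinema: frames are
# captured at 24 fps, each eye's image is flashed three times before
# the next frame arrives, and the two eyes alternate in counter-phase.

capture_rate = 24         # new images per second
flashes_per_image = 3
presentation_rate = capture_rate * flashes_per_image  # flashes/s per eye

print(f"presentation rate per eye: {presentation_rate} Hz")      # 72 Hz
print(f"image update rate:         {capture_rate} Hz")           # still 24 Hz
print(f"total alternating rate:    {2 * presentation_rate} Hz")  # 144 Hz
```

The point of Banks' question is visible in the numbers: the flicker rate (72 Hz) is high enough to look steady, yet the underlying motion is still sampled at only 24 Hz.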

"For these studies, it would be useful to have faster display technology than we currently have," Banks says. "With such technology, we would be able to better understand the consequences of using different temporal protocols in the presentation of stereoscopic video."

In a related project, Banks is studying how blur affects the perception of distance and size. Conventional optical devices, such as eyes and cameras, can be focused only at one distance at a time, which makes objects blurry when they are farther from or nearer than the focus distance. Banks is conducting studies both to determine the relationship between depth-of-field blur and other depth cues, and to understand how changes in that relationship affect human perception. The results of such investigations could influence the design of content for stereoscopic 3D cinema and television.
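A minimal model of that depth-of-field blur comes from standard geometric optics (this sketch is not Banks' model; the pupil size and distances are assumed values). To first order, the angular size of the blur circle for an out-of-focus point grows with the aperture and with the dioptric distance from the focal plane:

```python
import math

def retinal_blur_rad(pupil_diameter_m, focus_distance_m, object_distance_m):
    """Approximate angular diameter (radians) of the blur circle for an
    out-of-focus point: small-angle thin-lens model, where blur scales
    with aperture size times the dioptric defocus |1/z_focus - 1/z|."""
    return pupil_diameter_m * abs(1.0 / focus_distance_m - 1.0 / object_distance_m)

pupil = 0.0046  # ~4.6 mm pupil diameter, an assumed typical value
# Eyes focused at 0.5 m: an object at the focus distance is sharp,
# while one at 2 m is blurred by the defocus between the two depths.
in_focus = retinal_blur_rad(pupil, 0.5, 0.5)
defocused = retinal_blur_rad(pupil, 0.5, 2.0)
print(f"in focus: {in_focus:.4f} rad")
print(f"at 2 m:   {math.degrees(defocused):.2f} deg")  # 0.40 deg
```

Note that blur depends on distances in diopters, not meters, which is one reason depth-of-field rendering choices that look natural at cinema distances can look wrong on a nearby screen.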

Banks predicts that, despite the abundance of unanswered questions about how human perception works with simulated 3D, the technique will continue its momentum in moviemaking. He also predicts that display update rates will improve to the point where motion looks truly smooth, and that color reproduction will continue to improve until displayed colors look more like those in the natural world. "We're a long way from achieving these goals," he says. "But once we do, the experience of watching video will be truly breathtaking."

Microsoft Research's Akeley, for his part, predicts future 3D displays not only will eliminate the need to wear special glasses, but also will have the ability to track head movement to render motion parallax accurately. And he predicts the proliferation of more powerful content-creation technologies, such as movie-production systems that can render a scene from different viewpoints without reshooting it, and an overall better understanding of vision fatigue related to focus and depth cues. With these and other technological advances, 3D viewing experiences will be greatly improved, whether they occur on big screens or small ones.

* Further Reading

Akeley, K., Watt, S.J., Girshick, A.R., and Banks, M.S.
A stereo display prototype with multiple focal distances. ACM Transactions on Graphics 23, 3, August 2004.

Hoffman, D.M., Girshick, A.R., Akeley, K., and Banks, M.S.
Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. Journal of Vision 8, 3, March 28, 2008.

Love, G.D., Hoffman, D.M., Hands, P.J.W., Gao, J., Kirby, A.K., and Banks, M.S.
High-speed switchable lens enables the development of a volumetric stereoscopic display. Optics Express 17, 18, August 2009.

Mendiburu, B.
3D Movie Making: Stereoscopic Digital Cinema from Script to Screen. Focal Press, Burlington, MA, 2009.

Watt, S.J., Akeley, K., Ernst, M.O., and Banks, M.S.
Focus cues affect perceived depth. Journal of Vision 5, 10, December 15, 2005.


Author

Kirk L. Kroeker is a freelance editor and writer specializing in science and technology.


Footnotes

DOI: http://doi.acm.org/10.1145/1787234.1787241


Figures

Figure 1. a and b: A pair of custom-designed dynamic lenses constructed with birefringent material. The lenses are used to create a volumetric stereoscopic 3D display with four apparent image depths. Rendering illuminates pixels in inverse proportion to their distance from the simulated distance, creating a seamless sense of depth.

Figure 2. In the natural world, focus distance (the distance to which the eyes must focus to make an image sharp) and vergence distance (the distance at which the eyes' lines of sight converge on an object) are the same. However, most stereoscopic 3D displays require viewers to point their eyes at simulated distances while still focusing on the display's actual fixed distance. This incongruity can cause headaches and even nausea.



©2010 ACM  0001-0782/10/0800  $10.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.


 
