In recent months, one company after another has come out with products that appear to create holograms—but according to optics experts, most do not use true holography to create their three-dimensional (3D) effects.
“A lot of people abuse the word ‘holography,’” says James R. Fienup, Robert E. Hopkins Professor of Optics, and a professor of Electrical and Computer Engineering at the University of Rochester. “It’s kind of a catchy thing”—a quick way to evoke the futuristic coolness of this sci-fi staple—“so they call things ‘holograms’ that have nothing to do with holography.”
A notorious example is the so-called “Tupac hologram,” which stunned audiences at the 2012 Coachella music festival by appearing to show the rapper Tupac Shakur performing on stage years after he had been killed. The stunt, which became an Internet sensation, only reinforced the public’s misconception of what a hologram is. In fact, the effect didn’t use holography at all; rather, it repurposed a classic magician’s trick called Pepper’s Ghost, an illusion created through the clever use of a carefully angled pane of glass.
More recently, people have been using the word “holograms” for anything seen when you put on an augmented reality (AR) or virtual reality (VR) headset, says David Fattal, CEO of LEIA, an HP spinoff that has been developing a 3D display for smartphones. For example, Microsoft markets its HoloLens augmented reality headset as a form of “holographic computing,” and mainstream media typically describe the images seen through the device as “holograms,” though it is not clear what role holography plays in the technology. (Microsoft officials declined to be interviewed for this article.) Oculus, a competing headset, is also often described as holographic.
To most people, a hologram is any virtual object appearing in 3D form—even the images created using the simple stereoscopic effects seen through plastic 3D glasses. “That’s not the scientific definition,” Fattal says, adding that LEIA, too, is sometimes slammed at academic conferences for not using true holography.
True holography, in the scientific sense, refers to a process that uses wave interference effects to capture and display a three-dimensional object. The method, which goes back to the 1960s, uses two beams of coherent light, typically from a laser. “You shine a laser on something, and the light scattered from that comes to your holographic sensor, and you also shine on that same sensor a beam from the same laser that hasn’t struck the object,” explains Fienup. “You interfere those two together and you capture the whole electromagnetic field.” In fact, the “holo” in holography means “whole.”
The result is a set of interference fringes on the holographic film—a pattern of dark and bright regions that, unlike a photographic image, looks nothing like the original object; seeing an image resembling the original therefore requires a reconstruction step. This is done by shining laser light through the interference pattern, which functions as a diffraction grating that splits the light in different directions.
The key to getting the whole electromagnetic field—including the impression of depth—is holography’s capture of phase information, or the degree to which the light wave from the reference beam is out of step with the wave from the object beam. “What that provides is these interesting characteristics of three-dimensionality,” says Raymond Kostuk, a professor of Electrical and Computer Engineering, and of Optical Sciences, at the University of Arizona, who is using holography to develop more efficient processes for solar energy conversion, and cheaper methods of ovarian cancer detection. By capturing both phase and amplitude (intensity) information, holography shows more than do photographs, which capture information only about the intensity of the light.
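The recording step Fienup describes can be written down in a few lines of numerical simulation. In the hedged sketch below, every parameter (laser wavelength, sensor geometry, beam tilt, the object’s phase profile) is an illustrative assumption; the point is that the sensor stores only intensity, yet the intensity of the summed beams carries the object’s phase in the positions of its fringes, while a plain photograph of the same transparent object would record nothing.

```python
import numpy as np

# All parameters below (wavelength, sensor geometry, beam tilt, object
# phase profile) are illustrative assumptions.
wavelength = 633e-9                      # red HeNe laser line, meters
k = 2 * np.pi / wavelength               # wavenumber

N, pitch = 512, 5e-6                     # sensor pixels and pixel pitch
coords = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(coords, coords)

# Object wave: unit amplitude with a spatially varying phase, standing
# in for light scattered off a real object.
object_phase = 8.0 * np.exp(-(X**2 + Y**2) / (2 * (0.3e-3) ** 2))
object_wave = np.exp(1j * object_phase)

# Reference wave: the beam "that hasn't struck the object," arriving
# at a slight off-axis tilt.
theta = 0.02                             # radians
reference_wave = np.exp(1j * k * np.sin(theta) * X)

# The sensor records only intensity, but the intensity of the SUM
# encodes the object's phase in the positions of the fringes...
hologram = np.abs(object_wave + reference_wave) ** 2

# ...whereas a plain photograph of this transparent object is
# featureless: |exp(i*phase)|**2 == 1 everywhere.
photograph = np.abs(object_wave) ** 2
```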
Much of this process is now often done computationally, using CCD or CMOS cameras and algorithmic reconstruction. “Instead of recording on film, you record on the CCD camera, and then you store the information on a computer as a matrix,” explains Partha Banerjee, a professor of Electrical and Computer Engineering, and of Electro-optics, at the University of Dayton. To reconstruct the image, you process that matrix using well-known diffraction equations, which model how light waves propagate from one place to another—from the original object to the light sensor. “That’s digital holography,” says Banerjee, who was also general chair of this year’s Digital Holography and 3-D Imaging Conference, and who has used holography to capture the shape of raindrops and ice particles as they strike airplanes, to determine the three-dimensional characteristics of the dents created by such impacts.
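Those diffraction equations come in several standard numerical forms; one common choice in digital holography is the angular spectrum method, sketched below. The hologram here is random stand-in data, and the wavelength, pixel pitch, and propagation distance are assumptions chosen for illustration.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z (meters) using
    the angular spectrum method, one standard numerical form of the
    free-space diffraction equations."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)              # spatial frequencies, cycles/m
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2    # propagating vs. evanescent
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                             # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# The stored matrix from the camera; random stand-in data here.
hologram = np.random.default_rng(0).random((512, 512))

# "Illuminate" the recorded pattern with the reference wave (for a
# normally incident plane wave, that is just multiplication by 1),
# then propagate back to the object plane.
reconstruction = angular_spectrum_propagate(
    hologram.astype(complex), wavelength=633e-9, pitch=5e-6, z=-0.05)
image = np.abs(reconstruction) ** 2              # viewable intensity image
```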
One of the most popular applications of digital holography these days, say Banerjee and other experts, is digital holographic microscopy (DHM), which aims to capture precise images of microscopic objects, particularly living cells and tiny industrial components such as the ever-shrinking transistors printed on silicon wafers.
For example, Laura Waller, a professor of computer science and electrical engineering at the University of California, Berkeley, runs a Computational Imaging Lab that designs DHM tools for biological imaging, creating hardware and software simultaneously. “We’ve carefully designed our optical system so we’re getting enough information about the phase into our measurement,” she says, “and because we know the wave-optical physics model of the microscope, we can throw [the data we capture] into a non-linear, non-convex optimization problem so we can solve for the phase from these measurements.”
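Waller’s solvers are far more elaborate than anything that fits here, but the classic Gerchberg–Saxton iteration below illustrates the underlying idea of recovering phase from intensity-only measurements. It is a hedged toy example, not her lab’s algorithm: the “measurements” are synthesized from a made-up phase object, and the two planes are assumed to be related by a simple Fourier transform.

```python
import numpy as np

def gerchberg_saxton(amp_near, amp_far, n_iter=200):
    """Alternately enforce the measured amplitudes in two planes
    related by a Fourier transform, keeping the evolving phase."""
    rng = np.random.default_rng(0)
    field = amp_near * np.exp(2j * np.pi * rng.random(amp_near.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = amp_far * np.exp(1j * np.angle(far))       # far-plane constraint
        field = np.fft.ifft2(far)
        field = amp_near * np.exp(1j * np.angle(field))  # near-plane constraint
    return np.angle(field)  # recovered phase map

# Toy "measurements": a transparent square that only delays phase.
N = 256
true_phase = np.zeros((N, N))
true_phase[96:160, 96:160] = 1.0                 # 1-radian phase "cell"
amp_near = np.ones((N, N))                       # transparent: flat amplitude
amp_far = np.abs(np.fft.fft2(amp_near * np.exp(1j * true_phase)))

recovered_phase = gerchberg_saxton(amp_near, amp_far)
```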
Living cells are completely transparent, but they are thick enough to delay the phase of a light beam; by measuring phase delays, researchers can map the shapes and densities of cells.
Using phase delays to make transparent specimens visible is not new—Frits Zernike earned the Nobel Prize in Physics for similar work back in 1953—but traditional phase-contrast microscopy has drawbacks that DHM can overcome. “The Zernike phase-contrast microscope is a way of seeing those two things—the variations in thickness and the variations in density—but it’s not quantitative,” says Fienup. “It turns these phase variations into light and dark patterns, but you can’t tell exactly how much phase there was, how much thicker was it, or how much thinner was it—but with digital holography, you can actually measure the density and thickness.”
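That quantitative step is straightforward once the refractive-index contrast between the specimen and its surrounding medium is known: the measured phase delay converts directly into physical thickness. A one-function sketch, with index values that are typical textbook figures rather than numbers from this article:

```python
import numpy as np

def thickness_from_phase(phase_rad, wavelength, n_cell, n_medium):
    """Invert delta_phi = 2*pi*(n_cell - n_medium)*t / wavelength,
    turning a measured phase delay into a physical thickness t."""
    return phase_rad * wavelength / (2 * np.pi * (n_cell - n_medium))

# A 1.2-radian delay at 633 nm, with typical textbook indices for a
# cell (~1.37) in a watery medium (~1.33); all values assumed.
t = thickness_from_phase(1.2, 633e-9, n_cell=1.37, n_medium=1.33)
print(f"estimated thickness: {t * 1e6:.2f} micrometers")   # ~3.02
```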
Phase-contrast microscopes are also complicated pieces of machinery typically costing thousands of dollars. The DHM systems in Waller’s lab are more than an order of magnitude less expensive, she says; “they’re dirt-cheap, easy to use, and don’t have any special requirements. Then we use the computation to take on the burden that’s caused by doing that.”
Speed is of the essence when imaging biological samples. “We have about a half-second before the cells start moving around and everything gets blurred out,” Waller says. “We can’t just throw more and more data at it because the amount of data is constrained by how fast the camera can read it out.” One technique developed in Waller’s lab gets around the inherent trade-off between resolution and field of view by taking multiple low-resolution images of live cell samples across a wide field of view and computationally combining them to create high-resolution (gigapixel-scale) images.
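The technique Waller describes is Fourier ptychography (the subject of the Tian et al. paper cited below). The toy model that follows shows the principle, not her lab’s pipeline: each tilted LED illumination pushes a different patch of the object’s spatial-frequency spectrum through the lens’s small pupil, and an iterative update stitches the patches into a spectrum, and hence a resolution, far larger than any single shot allows. All sizes and shift values are illustrative.

```python
import numpy as np

N, radius = 256, 24                       # grid size and pupil radius (assumed)
fy, fx = np.indices((N, N)) - N // 2
pupil = (fx**2 + fy**2) <= radius**2      # small pupil of a low-res lens

rng = np.random.default_rng(0)
obj = np.exp(1j * rng.random((N, N)))     # stand-in complex object
spectrum = np.fft.fftshift(np.fft.fft2(obj))

# A 3x3 grid of LED illumination angles; each shifts the spectrum so a
# different patch passes through the fixed pupil.
shifts = [(dy, dx) for dy in (-32, 0, 32) for dx in (-32, 0, 32)]

# Forward model: one LOW-resolution intensity image per LED.
images = [np.abs(np.fft.ifft2(np.fft.ifftshift(
              np.roll(spectrum, (-dy, -dx), axis=(0, 1)) * pupil))) ** 2
          for dy, dx in shifts]

# Reconstruction: repeatedly paste each patch back under its measured
# amplitude; the covered spectrum grows well beyond the single-shot
# pupil, sidestepping the resolution/field-of-view trade-off.
est = np.ones((N, N), dtype=complex)      # estimate of the full spectrum
for _ in range(20):
    for (dy, dx), img in zip(shifts, images):
        rolled = np.roll(est, (-dy, -dx), axis=(0, 1))
        lowres = np.fft.ifft2(np.fft.ifftshift(rolled * pupil))
        lowres = np.sqrt(img) * np.exp(1j * np.angle(lowres))
        update = np.fft.fftshift(np.fft.fft2(lowres))
        rolled[pupil] = update[pupil]     # overwrite only inside the pupil
        est = np.roll(rolled, (dy, dx), axis=(0, 1))

highres = np.fft.ifft2(np.fft.ifftshift(est))   # recovered complex object
```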
In a related development called 4D holography, holographers add the dimension of time to show 3D objects in motion—for example, a holographic reconstruction of embryonic blood flow.
Although all these holographic techniques promise to aid both basic research and biomedical applications like early disease detection, what interests most people are moving images of people and ordinary, non-microscopic objects—bringing sci-fi effects into our daily lives. Unlike pseudo-holography, a true holographic display would simulate a crucial characteristic of the way we see 3D objects in the real world: objects appear different from different points of view (parallax), and as we change our perspective, that parallax experience is continuous, not jumpy, Fattal explains. True holographic displays, however, are currently impractical, he says.
For one thing, creating diffraction patterns requires very small pixels—on the order of 100 nanometers, he says, whereas on today’s screens the smallest pixel size is about 20 to 50 microns. “You’re two or three orders of magnitude off, which means you’d need a screen of trillions of pixels, which is just ridiculous,” Fattal says.
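Fattal’s two figures, the pitch gap and the pixel count, follow from back-of-envelope arithmetic. The screen dimensions below are an assumption (roughly a 5.5-inch phone), not from the article:

```python
# Back-of-envelope version of Fattal's argument; screen dimensions
# are assumed for illustration.
screen_w, screen_h = 0.12, 0.07      # meters (assumed)

holographic_pitch = 100e-9           # ~100 nm, the figure Fattal cites
current_pitch = 20e-6                # 20 microns, low end of today's screens

# Linear pitch gap: 200x here, i.e., "two or three orders of magnitude."
print(current_pitch / holographic_pitch)                     # 200.0

# Pixel counts compound that gap in two dimensions.
n_holo = (screen_w / holographic_pitch) * (screen_h / holographic_pitch)
n_now = (screen_w / current_pitch) * (screen_h / current_pitch)
print(f"{n_holo:.1e} vs {n_now:.1e}")          # ~8.4e11 vs ~2.1e7 pixels
```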
Real-time motion is even harder: making a holographic image move at a normal video rate requires recomputing the diffraction fringes every 1/60th of a second—too fast for anything short of a supercomputer, even with the fastest available algorithms.
Yet Fattal is aiming to achieve holographic video effects not on a supercomputer or even a desktop machine, but on the smartphone, the most popular computing platform on Earth. LEIA, which will make its screens available to consumers through deals with mobile device manufacturers, has announced plans to ship its first screens by the end of 2017.
The trick, Fattal says, is breaking the hologram down into pieces, rather than treating it as a single image. “We take a generic hologram—you can think of it as a linear superposition of different arrays of light or different pieces of light coming from the different regions on the diffracting plane—and we manage to simplify the hologram, to think of it as different pieces,” he says.
“The diffraction pattern can cater to different scenes—all we have to do is change the relative intensity of each portion,” Fattal explains. “It’s taking the best of holography in terms of image quality, but it’s simplifying it and stripping it of superfluous information, and therefore we can make it move very quickly.” Eventually, users will be able to interact with such 3D images by hovering over the smartphone screen rather than touching it, he says.
Such simplification is good enough, Fattal says, because of the limitations of the human visual system. A hologram that contains all the information about a certain scene, he points out, contains too much information—including information to which your eye would never be sufficiently sensitive. “So if you know how to simplify the holographic rendering process, then you don’t have to carry all the extra information, and that helps to make things move faster.”
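A hedged sketch of that decomposition, with invented names and sizes and no claim to match LEIA’s actual implementation: the diffracting plane is treated as a fixed library of precomputed fringe “pieces,” and rendering a frame means updating a handful of weights rather than recomputing fringes wholesale.

```python
import numpy as np

N, n_pieces = 256, 16                 # fringe resolution, library size (assumed)
rng = np.random.default_rng(0)

# Precomputed offline, once: one fringe pattern per "piece," each
# steering light from one region of the diffracting plane toward one
# set of view directions. Random stand-ins here.
pieces = rng.random((n_pieces, N, N))

def render_frame(weights):
    """Per-frame work is a weighted sum of fixed patterns: the scene
    changes by changing only the relative intensity of each piece."""
    w = np.asarray(weights).reshape(-1, 1, 1)
    return (w * pieces).sum(axis=0)

# Animating = updating 16 scalars per frame, not N*N fringe values.
frame_a = render_frame(rng.random(n_pieces))
frame_b = render_frame(rng.random(n_pieces))
```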
Nehmetallah, G., and Banerjee, P.P.
Applications of digital and analog holography in three-dimensional imaging, Advances in Optics and Photonics, Vol. 4, Issue 4, pp. 472–553 (2012), https://www.osapublishing.org/aop/abstract.cfm?uri=aop-4-4-472
Fattal, D., Peng, Z., Tran, T., Vo, S., Fiorentino, M., Brug, J., and Beausoleil, R.G.
A multi-directional backlight for a wide-angle, glasses-free three-dimensional display, Nature, Vol. 495, pp. 348–351 (2013), http://www.nature.com/nature/journal/v495/n7441/full/nature11972.html
Kim, M.K.
Principles and techniques of digital holographic microscopy, SPIE Reviews, Vol. 1, 018005 (2010), http://faculty.cas.usf.edu/mkkim/papers.pdf/2010%20SR%201%20018005.pdf
Tian, L., Li, X., Ramchandran, K., and Waller, L.
Multiplexed coded illumination for Fourier Ptychography with an LED array microscope, Biomedical Optics Express, Vol. 5, Issue 7, pp. 2376–2389 (2014), https://www.osapublishing.org/boe/abstract.cfm?uri=boe-5-7-2376