
Imaging the Propagation of Light Through Scenes at Picosecond Resolution


We present a novel imaging technique, which we call femto-photography, to capture and visualize the propagation of light through table-top scenes with an effective exposure time of 1.85 ps per frame. This is equivalent to a resolution of about one half trillion frames per second; between frames, light travels only about 0.5 mm. Since cameras with such extreme shutter speeds do not exist, we first re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor’s spatial dimensions. We then introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through the scenes. Given this fast resolution and the finite speed of light, we observe that the camera does not necessarily capture events in the same order as they occur in reality: we thus introduce the notion of time-unwarping between the camera’s and the world’s space–time coordinate systems to take this into account. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique has already motivated new forms of computational photography, as well as novel algorithms for the analysis and synthesis of light transport.

1. Introduction

The way in which light travels through a scene (the paths it follows, and how its intensity evolves over time) is an extraordinarily rich source of information about the nature of that scene. From it, we can, for instance, obtain the geometry of the scene (even of those parts that are not visible to the camera),6, 11, 17 the reflectance of the objects present,12 or even derive other material properties.20 As such, it can play a very important role in a variety of fields, including computer graphics, computer vision, and scientific imaging in general, with applications in medicine, defense, or industrial processes, to name a few. Traditionally, however, light has been assumed to travel instantaneously through a scene (its speed assumed to be infinite), because conventional imaging hardware is very slow compared to the speed of light. Consequently, any information encoded in the time delays of light propagation is lost, and disambiguating light transport becomes an arduous, often impossible, task. In the past few years, the joint design of novel optical hardware and smart computation (i.e., computational photography) has expanded the way we capture, analyze, and understand visual information; speed-of-light propagation, however, has remained largely unexplored at the macroscopic scale, due to the impossibility of capturing such information.

In this paper, we present a novel ultrafast imaging technique, which we term femto-photography, that allows us to capture movies of a scene at an effective frame rate of one half trillion frames per second. With this extremely high temporal resolution, we can obtain movies in which light travels less than a millimeter per frame (the duration of one frame is ca. 2 ps; 1 ps = 10⁻¹² s), enabling us to capture light in motion as it travels through a scene. This allows us to see, for instance, a light pulse scattering inside a plastic bottle, image formation in a mirror, or beam diffraction by a grating, as a function of time. Further, the captured, time-resolved data has the potential to allow us to determine light transport in complex scenes.

Our system makes use of femtosecond laser illumination, picosecond-accurate detectors, and mathematical reconstruction techniques to obtain and correctly visualize the final movie. In it, for each pixel of each frame, we have an intensity profile as a function of time, with a resolution of 1.85 ps.

Developing such a time-resolved system is a challenging problem for several reasons. Our contribution in this work is to address these challenges and create the first prototype capable of capturing such temporal resolutions in macroscopic scenes. These challenges include the following. First, brute-force exposure times under 2 ps yield an impractical signal-to-noise ratio (SNR), since during each exposure only a few photons will reach the sensor. To improve the SNR of the acquisition process, we exploit the statistical similarity of periodic light transport events; this allows recording multiple, ultrashort exposures of one-dimensional views. These views are one-dimensional due to the second challenge we face: suitable cameras to record 2D image sequences at this time resolution do not exist, due to sensor bandwidth limitations. To solve this, we introduce a novel hardware implementation that sweeps the exposures across a vertical field of view, to build 3D space–time data volumes. Third, comprehensible visualization of the captured time-resolved data is non-trivial: it is a novel type of data that we are not accustomed to seeing, and some observed effects can be counter-intuitive. We therefore create techniques for comprehensible visualization of this time-resolved data, including movies showing the dynamics of real-world light transport phenomena and the notion of peak time, which partially overcomes the low-frequency appearance of integrated global light transport. Finally, direct measurements of events at this speed appear warped in space–time, because the finite speed of light implies that the recorded light propagation delay depends on the camera’s position relative to the scene. To correct for this, and to visualize events in their correct sequence, we introduce a time-unwarping technique, which accounts for the distortions in captured time-resolved information due to the finite speed of light.

In the following, we describe these contributions in detail. We explain our complete hardware, calibration, data processing, and visualization pipeline, and demonstrate its potential by acquiring time-resolved movies of significant light transport effects, including scattering, diffraction, and multiple diffuse interreflections. We further discuss possible applications of this new imaging modality, and the relevance of this work not only in imaging, but also in areas such as biomedical research or astronomy.

2. Related Work

*  2.1. Ultrafast devices

Repetitive illumination techniques used in incoherent LiDAR employ cameras with typical exposure times on the order of hundreds of picoseconds, two orders of magnitude slower than our system.2 The fastest 2D continuous, real-time monochromatic camera operates at hundreds of nanoseconds per frame, with a spatial resolution of 200 × 200 pixels, less than one-third of what we achieve in this paper.3 Avalanche photodiode (APD) arrays can reach temporal resolutions of several tens of picoseconds if they are used in a photon-starved regime where only a single photon hits a detector within a time window of tens of nanoseconds.1 Liquid nonlinear shutters actuated with powerful laser pulses have been used to capture single analog frames imaging light pulses at picosecond time resolution. Other sensors that use a coherent phase relation between the illumination and the detected light, such as optical coherence tomography (OCT), coherent LiDAR, light-in-flight holography, or white light interferometry, achieve femtosecond resolutions; however, they require light to maintain coherence (i.e., wave interference effects) during light transport, and are therefore unsuitable for indirect illumination, in which diffuse reflections remove coherence from the light. Last, simple streak sensors capture incoherent light at picosecond to nanosecond speeds, but are limited to a line or a low-resolution (20 × 20) square field of view.15

In contrast, our system is capable of recording and reconstructing space–time world information of incoherent light propagation in free-space, table-top scenes, at a resolution of up to 672 × 1000 pixels and under 2 ps per frame. The varied range and complexity of the scenes we can capture allows us to visualize the dynamics of global illumination effects, such as scattering, specular reflections, inter-reflections, subsurface scattering, caustics, and diffraction.

*  2.2. Time-resolved imaging

Recent advances in time-resolved imaging have been exploited to recover geometry and motion around corners,4,10,14,16,17 as well as albedo from a single view point.12 However, they all share some fundamental limitations (such as capturing only third-bounce light) that make them unsuitable for capturing videos of light in motion. The principles we develop in this paper were first demonstrated by the authors in two previous publications18,19; this has given rise to alternative, inexpensive PMD-based approaches (e.g., Ref.5), although the temporal resolution achieved is on the order of nanoseconds (instead of picoseconds). Wu et al.21 present a rigorous analysis of transient light transport in the frequency domain, and show how it can be applied to build a bare-sensor ultrafast imaging system. Last, two recent publications provide valuable tools for time-resolved imaging: Wu and colleagues20 separate direct and global illumination components from time-resolved data captured with the system we describe in this paper, by analyzing the time profile of each pixel, and demonstrate a number of applications; whereas Jarabo et al.7 present a framework for the efficient simulation of light-in-flight movies, which enables analysis-by-synthesis approaches for the analysis of transient light transport.

3. Capturing Space–Time Planes

We capture time scales orders of magnitude faster than the exposure times of conventional cameras, in which photons reaching the sensor at different times are integrated into a single value, making it impossible to observe ultrafast optical phenomena. The system described in this paper has an effective exposure time down to 1.85 ps; since light travels at about 0.3 mm/ps, it advances approximately 0.5 mm between frames in our reconstructed movies.

*  3.1. System

An ultrafast setup must overcome several difficulties in order to accurately measure a high-resolution (both in space and time) image. First, for an unamplified laser pulse, a single exposure time of less than 2 ps would not collect enough light, so the SNR would be unworkably low. As an example, for a table-top scene illuminated by a 100 W bulb, only about one photon on average would reach the sensor during a 2 ps open-shutter period. Second, because of the time scales involved, synchronization of the sensor and the illumination must be achieved with picosecond precision. Third, standalone streak sensors sacrifice the vertical spatial dimension in order to code the time dimension, thus producing x–t images. As a consequence, their field of view is reduced to a single horizontal line of the scene.

We solve these problems with our ultrafast imaging system, outlined in Figure 2. The light source is a femtosecond Kerr-lens mode-locked Ti:Sapphire laser, which emits 50 fs pulses with a center wavelength of 795 nm, at a repetition rate of 75 MHz and an average power of 500 mW. In order to see ultrafast events in a scene with macro-scaled objects, we focus the light with a lens onto a Lambertian diffuser, which then acts as a point light source and illuminates the entire scene with a spherically shaped pulse. Alternatively, if we want to observe pulse propagation itself, rather than its interactions with large objects, we direct the laser beam across the field of view of the camera through a scattering medium (see the bottle scene in Figure 1).
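As a rough back-of-the-envelope check, the stated laser parameters (500 mW average power, 75 MHz repetition rate, 50 fs pulses) directly imply the per-pulse energy, peak power, and spatial extent of each pulse. The short Python sketch below only restates this arithmetic; it is illustrative and not part of the capture pipeline.

```python
# Quantities implied by the stated laser parameters (illustrative arithmetic only).
c = 3.0e8                 # m/s, speed of light

avg_power = 0.5           # W   (500 mW average power)
rep_rate = 75e6           # Hz  (75 MHz repetition rate)
pulse_duration = 50e-15   # s   (50 fs pulses)

energy_per_pulse = avg_power / rep_rate          # ~6.7e-9 J, a few nanojoules
peak_power = energy_per_pulse / pulse_duration   # ~1.3e5 W, roughly 130 kW
pulse_extent = c * pulse_duration                # ~1.5e-5 m, about 15 micrometers in air

print(f"{energy_per_pulse * 1e9:.1f} nJ per pulse, "
      f"{peak_power / 1e3:.0f} kW peak power, "
      f"{pulse_extent * 1e6:.0f} um spatial pulse length")
```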

Because all the pulses are statistically identical, we can record the scattered light from many of them and integrate the measurements to average out noise. The result is a signal with a high SNR. To synchronize the illumination with the streak sensor (Hamamatsu C5680), we split off a portion of the beam with a glass slide and direct it onto a fast photo-detector connected to the sensor, so that both detector and illumination operate synchronously (Figure 2a and b).

*  3.2. Capturing space–time planes

The streak sensor then captures an x–t image of a certain scanline (i.e., a line of pixels in the horizontal dimension) of the scene with a space–time resolution of 672 × 512. The exact time resolution depends on the amplification of an internal sweep voltage signal applied to the streak sensor. With our hardware, it can be adjusted from 0.30 to 5.07 ps. Practically, we choose the fastest resolution that still allows for capture of the entire duration of the event. In the streak sensor, a photocathode converts incoming photons, arriving from each spatial location in the scanline, into electrons. The streak sensor generates the x–t image by deflecting these electrons, according to the time of their arrival, to different positions along the t-dimension of the sensor (see Figure 2b and c). This is achieved by means of rapidly changing the sweep voltage between the electrodes in the sensor. For each horizontal scanline, the camera records a scene illuminated by the pulse and averages the light scattered by 4.5 × 10⁸ pulses (see Figure 2d and e).
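As a quick consistency check (a sketch based only on numbers quoted in the text and in the Figure 2 caption), 512 time bins at the 1.85 ps setting span roughly the 1 ns recording window, and averaging 4.5 × 10⁸ pulses at the 75 MHz repetition rate corresponds to about 6 s of integration per scanline, matching the per-line recording time reported in Section 4.

```python
# Consistency checks on the capture parameters quoted in the text.
time_bins = 512            # t-dimension of each x-t streak image
dt = 1.85e-12              # s, effective exposure per time bin at the fastest setting used

time_window = time_bins * dt            # ~0.95e-9 s, i.e., roughly the 1 ns window of Figure 2e

rep_rate = 75e6            # Hz, laser repetition rate
pulses_per_line = 4.5e8    # pulses averaged per scanline

integration_time = pulses_per_line / rep_rate   # 6.0 s of integration per scanline (cf. Section 4)

print(f"time window ~ {time_window * 1e9:.2f} ns, integration ~ {integration_time:.0f} s per line")
```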

*  3.3. Performance validation

To characterize the streak sensor, we compare sensor measurements with known geometry and verify the linearity, reproducibility, and calibration of the time measurements. To do this, we first capture a streak image of a scanline of a simple scene: a plane being illuminated by the laser after hitting the diffuser (see Figure 3, left). Then, by using a Faro digitizer arm, we obtain the ground truth geometry of the points along that plane and of the point of the diffuser hit by the laser; this allows us to compute the total travel time per path (diffuser-plane-streak sensor) for each pixel in the scanline. We then compare the travel time captured by our streak sensor with the real travel time computed from the known geometry. The graph in Figure 3 (right) shows agreement between the measurement and calculation.
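The comparison itself is simple to express: for every point on the scanline, the expected arrival time is the diffuser-to-point plus point-to-camera path length divided by the speed of light. The sketch below uses made-up coordinates for the diffuser, camera, and checkerboard plane; in the actual validation these positions come from the Faro digitizer arm.

```python
# Sketch of the Figure 3 validation: expected arrival time per scanline pixel from
# known geometry. All coordinates below are hypothetical placeholders (in mm).
import numpy as np

C_MM_PER_PS = 0.3                          # light travels ~0.3 mm per picosecond

diffuser = np.array([0.0, 0.0, 0.0])       # laser spot on the diffuser
camera = np.array([0.0, 50.0, -1000.0])    # streak sensor position

# 672 sample points along the illuminated scanline of the checkerboard plane.
scanline = np.stack([np.linspace(-150.0, 150.0, 672),
                     np.zeros(672),
                     np.full(672, 900.0)], axis=1)

path_length = (np.linalg.norm(scanline - diffuser, axis=1) +
               np.linalg.norm(camera - scanline, axis=1))
expected_t = path_length / C_MM_PER_PS     # ps; compared against the measured streak times
```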

4. Capturing Space–Time Volumes

Although the synchronized, pulsed measurements overcome SNR issues, the streak sensor still provides only a one-dimensional movie. Extension to two dimensions requires unfeasible bandwidths: a typical dimension is roughly 10³ pixels, so a three-dimensional data cube has 10⁹ elements. Recording such a large quantity in a 10⁻⁹ s (1 ns) time window requires a bandwidth of 10¹⁸ byte/s, far beyond typical available bandwidths.

We solve this acquisition problem by again utilizing the synchronized repeatability of the hardware: A mirror-scanning system (two 9 cm × 13 cm mirrors, see Figure 2a) rotates the camera’s center of projection, so that it records horizontal slices of a scene sequentially. We use a computer-controlled, 1-rpm servo motor to rotate one of the mirrors and consequently scan the field of view vertically. The scenes are about 25 cm wide and placed about 1 m from the camera. With high gear ratios (up to 1:1000), the continuous rotation of the mirror is slow enough to allow the camera to record each line for about 6 s, requiring about one hour for 600 lines (our video resolution). We generally capture extra lines, above and below the scene (up to 1000 lines), and then crop them to match the aspect ratio of the physical scenes before the movie is reconstructed.

These resulting images are combined into one matrix, Mijk, where i = 1, …, 672 and k = 1, …, 512 are the dimensions of the individual x–t streak images, and j = 1, …, 1000 addresses the second spatial dimension y. For a given time instant k, the submatrix Nij contains a two-dimensional image of the scene with a resolution of 672 × 1000 pixels, exposed for as short as 1.85 ps. Combining the x–t slices of the scene for each scanline yields a 3D x–y–t data volume, as shown in Figure 4 (left). An x–y slice represents one frame of the final movie, as shown in Figure 4 (right).
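The construction of this volume, and the extraction of a single frame from it, can be sketched in a few lines of NumPy. The loader below is a hypothetical placeholder for whatever the acquisition pipeline writes to disk; array shapes follow the dimensions given in the text.

```python
# Minimal sketch: stack per-scanline x-t streak images into an x-y-t data cube M_ijk
# and pull out one ~1.85 ps movie frame N_ij.
import numpy as np

NX, NY, NT = 672, 1000, 512        # x (streak width), y (scanlines), t (time bins)

def load_streak_image(j):
    # Hypothetical placeholder: would read the j-th captured 672 x 512 x-t streak image.
    return np.zeros((NX, NT), dtype=np.float32)

M = np.empty((NX, NY, NT), dtype=np.float32)
for j in range(NY):
    M[:, j, :] = load_streak_image(j)

def frame(M, k):
    """Return the k-th movie frame N_ij: a 672 x 1000 image exposed for ~1.85 ps."""
    return M[:, :, k]
```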

5. Depicting Ultrafast Videos in 2D

We have explored several ways to visualize the information contained in the captured x–y–t data cube in an intuitive way. First, contiguous Nij slices can be played as the frames of a movie. Figure 1 (bottom row) shows a captured scene (bottle) along with several representative Nij frames (effects are described for various scenes in Section 7). However, understanding all the phenomena shown in a video is not a trivial task, and movies composed of x–y frames such as the ones shown in Figure 8 may be hard to interpret. Merging a static photograph of the scene, taken from approximately the same point of view, with the Nij slices aids in the understanding of light transport in the scenes (see movies at the project pages). Although this merging is straightforward to implement, the high dynamic range of the streak data requires a nonlinear intensity transformation to extract subtle optical effects in the presence of high-intensity reflections. We employ a logarithmic transformation to this end.
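The exact normalization is not specified in the text, so the following is only a minimal sketch of such a logarithmic mapping applied to one frame, assuming the data cube from the sketch above.

```python
# Illustrative log tone mapping for a single high-dynamic-range frame.
import numpy as np

def tonemap_log(frame, eps=1e-6):
    # Compress the dynamic range so that faint indirect effects remain visible
    # next to strong direct reflections.
    x = np.log1p(frame / (frame.max() + eps))
    return x / (x.max() + eps)
```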

We have also explored single-image methods for intuitive visualization of full space–time propagation, such as the color-coding in Figure 1 (right), which we describe next.

*  5.1. Integral photo fusion

By integrating all the frames in novel ways, we can visualize and highlight different aspects of the light flow in one photo. Our photo fusion results are calculated as Nij = Σk wk Mijk, with k = 1, …, 512, where wk is a weighting factor determined by the particular fusion method. We have tested several different methods, of which two were found to yield the most intuitive results: the first one is full fusion, where wk = 1 for all k. Summing all frames of the movie provides something resembling a black and white photograph of the scene illuminated by the laser, while showing time-resolved light transport effects. An example is shown in Figure 5 (left) for the alien scene (more information about the scene is given in Section 7). A second technique, rainbow fusion, takes the fusion result and assigns a different RGB color to each frame, effectively color-coding the temporal dimension. An example is shown in Figure 5 (middle).
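Both fusion modes reduce to a weighted sum over the time axis of the data cube. The sketch below follows that description, using a matplotlib colormap as a stand-in for the paper's (unspecified) time-to-color assignment.

```python
# Sketch of full fusion and rainbow fusion on the x-y-t cube M (shape 672 x 1000 x 512).
import numpy as np
import matplotlib.cm as cm

def full_fusion(M):
    # w_k = 1 for all k: a grayscale "photograph" of the laser-lit scene.
    return M.sum(axis=2)

def rainbow_fusion(M):
    # Assign each time bin k an RGB color, then sum: time is encoded as hue.
    K = M.shape[2]
    colors = cm.jet(np.linspace(0.0, 1.0, K))[:, :3]    # K x 3 RGB weights
    return np.tensordot(M, colors, axes=([2], [0]))     # 672 x 1000 x 3 image
```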

*  5.2. Peak time images

The inherent integration in fusion methods, though often useful, can fail to reveal the most complex or subtle behavior of light. As an alternative, we propose peak time images, which illustrate the time evolution of the maximum intensity in each frame. For each spatial position (i, j) in the x–y–t volume, we find the peak intensity along the time dimension, and keep information within two time units to each side of the peak. All other values in the streak image are set to zero, yielding a more sparse space–time volume. We then color-code time and sum up the x–y frames in this new sparse volume, in the same manner as in the rainbow fusion case, but using only every 20th frame in the sum to create black lines between the equi-time paths, or isochrones. This results in a map of the propagation of maximum intensity contours, which we term peak time image. These color-coded isochronous lines can be thought of intuitively as propagating energy fronts. Figure 5 (right) shows the peak time image for the alien scene and Figure 1 (top, middle) shows the captured data for the bottle scene depicted using this visualization method. As explained in the next section, this visualization of the bottle scene reveals significant light transport phenomena that could not be seen with the rainbow fusion visualization.
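A compact sketch of this procedure is given below, under the same assumptions as the fusion code above; the ±2-bin window around each peak and the every-20th-frame stride follow the description in the text, while the colormap is again only a stand-in.

```python
# Sketch of the peak-time visualization on the x-y-t cube M.
import numpy as np
import matplotlib.cm as cm

def peak_time_image(M, half_width=2, stride=20):
    NX, NY, NT = M.shape
    k_peak = M.argmax(axis=2)                              # temporal peak per pixel
    k = np.arange(NT)[None, None, :]
    mask = np.abs(k - k_peak[:, :, None]) <= half_width    # keep +/- 2 bins around the peak
    sparse = np.where(mask, M, 0.0)

    colors = cm.jet(np.linspace(0.0, 1.0, NT))[:, :3]      # color-code the time dimension
    sel = np.arange(0, NT, stride)                         # every 20th frame -> dark bands
    return np.tensordot(sparse[:, :, sel], colors[sel], axes=([2], [0]))
```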

6. Time Unwarping

Visualization of the captured movies (Sections 5 and 7) reveals results that are counter-intuitive to theoretical and established knowledge of light transport. Figure 1 (top, middle) shows a peak time visualization of the bottle scene, where several abnormal light transport effects can be observed: (1) the caustics on the floor, which propagate towards the bottle, instead of away from it; (2) the curved spherical energy fronts in the label area, which should be rectilinear as seen from the camera; and (3) the pulse itself being located behind these energy fronts, when it would need to precede them. These effects arise because light propagation is usually assumed to be infinitely fast, so that events in world space are assumed to be detected simultaneously in camera space. In our ultrafast photography setup, however, this assumption no longer holds, and the finite speed of light becomes a factor: we must now take into account the time delay between the occurrence of an event and its detection by the camera sensor.

We therefore need to consider two different time frames, namely world time (when events happen) and camera time (when events are detected). This duality of time frames is explained in Figure 6: light from a source hits a surface first at point P1 = (i1, j1) (with (i, j) being the x–y pixel coordinates of a scene point in the x–y–t data cube), then at the farther point P2 = (i2, j2), but the reflected light is captured in the reverse order by the sensor, due to different total path lengths (z1 + d1 > z2 + d2). Generally, this is due to the fact that, for light to arrive at a given time instant t0, all the rays from the source, to the wall, to the camera, must satisfy zi + di = ct0, so that isochrones are elliptical. Therefore, although objects closer to the source receive light earlier, they can still lie on a higher-valued (later-time) isochrone than farther ones.

In order to visualize all light transport events as they have occurred (not as the camera captured them), we transform the captured data from camera time to world time, a transformation which we term time unwarping. Mathematically, for a scene point P = (i, j), we apply the following transformation:

t′ij = tij + zij η / c     (1)

where t′ij and tij represent camera and world times respectively, c is the speed of light in vacuum, η the index of refraction of the medium, and zij is the distance from point P to the camera. For our table-top scenes, we measure this distance with a Faro digitizer arm, although it could be obtained from the data and the known position of the diffuser, as the problem is analogous to that of bi-static LiDAR. We can thus define the light travel time from each point (i, j) in the scene to the camera as Δtij = zij η / c. Then, time unwarping effectively corresponds to offsetting data in the x–y–t volume along the time dimension, according to the value of Δtij for each (i, j) point, as shown in Figure 7.
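A minimal sketch of this offsetting step is shown below (assuming a per-pixel distance map z to the camera, a uniform index of refraction, and rounding of the delays to whole time bins; since the absolute time origin of the capture window is arbitrary, the shifts are applied relative to the smallest delay, which is an assumption of this sketch rather than a detail given in the text).

```python
# Sketch of time unwarping: shift each pixel's time profile by delta_t_ij = z_ij * eta / c.
import numpy as np

def time_unwarp(M, z, dt=1.85e-12, eta=1.0, c=3.0e8):
    """M: x-y-t data cube; z: per-pixel distance to the camera in meters."""
    NX, NY, NT = M.shape
    delay = z * eta / c                                        # seconds, per pixel
    shift = np.rint((delay - delay.min()) / dt).astype(int)    # relative shift in time bins

    out = np.zeros_like(M)
    for i in range(NX):
        for j in range(NY):
            s = min(shift[i, j], NT)
            # Camera time t' = world time t + delta_t, so the world-time profile is the
            # captured profile moved earlier by s bins.
            out[i, j, :NT - s] = M[i, j, s:]
    return out
```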

In most of the scenes, we only have propagation of light through air, for which we take η ≈ 1. For the bottle scene, we assume that the laser pulse travels along its longitudinal axis at the speed of light, and that only a single scattering event occurs in the liquid inside. We take η = 1.33 as the index of refraction of the liquid and ignore refraction at the bottle’s surface. Our unoptimized Matlab code runs at about 0.1 s per frame. A time-unwarped peak-time visualization of the whole of this scene is shown in Figure 1 (right). Notice how now the caustics originate from the bottle and propagate outward, energy fronts along the label are correctly depicted as straight lines, and the pulse precedes related phenomena, as expected.

7. Captured Scenes

We have used our ultrafast photography setup to capture interesting light transport effects in different scenes. Figure 8 summarizes them, showing representative frames and peak time visualizations (please also refer to the full movies, which can be found in the project pages: femtocamera.info and http://giga.cps.unizar.es/~ajarabo/pubs/femtoSIG2013/). The exposure time for our scenes is between 1.85 ps for the crystal scene, and 5.07 ps for the bottle and tank scenes, which required imaging a longer time span for better visualization. Overall, observing light in such slow motion reveals both subtle and key aspects of light transport. We provide here brief descriptions of the light transport effects captured in the different scenes.

*  7.1. Bottle

This scene is shown in Figure 1 (bottom row), and has been used to introduce time-unwarping. A plastic bottle, filled with water diluted with milk, is directly illuminated by the laser pulse, entering through the bottom of the bottle along its longitudinal axis. The pulse scatters inside the liquid; we can see the propagation of the wavefronts. The geometry of the bottle neck creates some interesting lens effects, making light look almost like a fluid. Most of the light is reflected back from the cap, while some is transmitted or trapped in subsurface scattering phenomena. Caustics are generated on the table.

*  7.2. Tomato-tape

This scene shows a tomato and a tape roll, with a wall behind them. The propagation of the spherical wavefront, after the laser pulse hits the diffuser, can be seen clearly as it intersects the floor and the back wall (A, B). The inside of the tape roll is out of the line of sight of the light source and is not directly illuminated. It is illuminated later, as indirect light scattered from the first wave reaches it (C). Shadows become visible only after the object has been illuminated. The more opaque tape darkens quickly after the light front has passed, while the tomato continues glowing for a longer time, indicative of stronger subsurface scattering (D).

*  7.3. Alien

A toy alien is positioned in front of a mirror and wall. Light interactions in this scene are extremely rich, due to the mirror, the multiple interreflections, and the subsurface scattering in the toy. The video shows how the reflection in the mirror is actually formed: direct light first reaches the toy, but the mirror is still completely dark (E); eventually light leaving the toy reaches the mirror, and the reflection is dynamically formed (F). Subsurface scattering is clearly present in the toy (G), while multiple direct and indirect interactions between wall and mirror can also be seen (H).

*  7.4. Crystal

A group of sugar crystals is directly illuminated by the laser from the left, acting as multiple lenses and creating caustics on the table (I). Part of the light refracted on the table is reflected back to the candy, creating secondary caustics on the table (J). Additionally, scattering events are visible within the crystals (K).

*  7.5. Tank

A reflective grating is placed at the right side of a tank filled with milk diluted in water. The grating is taken from a commercial spectrometer, and consists of an array of small, equally spaced rectangular mirrors. The grating is blazed: mirrors are tilted to concentrate maximum optical power in the first order diffraction for one wavelength. The pulse enters the scene from the left, travels through the tank (L), and strikes the grating. The grating reflects and diffracts the beam pulse (M). The different orders of the diffraction are visible traveling back through the tank (N). As the figure (and the captured movie) shows, most of the light reflected from the grating propagates at the blaze angle.

8. Conclusions and Outlook

Since the initial publication of this work, numerous publications have advanced the field by improving numerical models and introducing new, more accessible capture technology. Heide et al.5 and Kadambi et al.9 introduced methods of low-resolution time-of-flight capture using inexpensive photonic mixer devices. These new devices have been used in different applications, for example to see around corners,6, 13 although their temporal resolution is still orders of magnitude lower than that of our system. Laurenzis and Velten11 have recently demonstrated seeing around corners using intensified gated CCD cameras, which are the state of the art in gated viewing applications and are available, for example, in military vehicles.

Optimization of the system hardware and software requires further advances in optics, material science, and compressive sensing. Beyond the potential in artistic and educational visualization, we hope our work will spawn new research in computer graphics and computational imaging techniques towards useful forward and inverse analysis of light interactions, which in turn will influence the rapidly emerging field of ultrafast imaging.

Future research involves investigating other ultrafast phenomena, such as the propagation of light in anisotropic media and photonic crystals, or novel applications in scientific visualization (to understand ultrafast processes), medicine (to image and reconstruct subsurface elements), material engineering (to analyze material properties), or quality control (to detect faults in structures). This may in turn introduce new challenges in the realm of computer graphics, to provide new insights via comprehensible simulations and new data structures to render transient light transport. For instance, our work has recently inspired a novel method for the efficient simulation of time-resolved light transport,7 while relativistic rendering techniques have been developed using our captured data, departing from the common assumption of constant irradiance over the surfaces8 (Figure 9).

Acknowledgments

Belen Masia would like to acknowledge the support of the Max Planck Center for Visual Computing and Communication. Diego Gutierrez would like to acknowledge the support of the Spanish Ministry of Science and Innovation (project Lightslice), the BBVA Foundation, and a Faculty Research Award from Google.

Figures

F1 Figure 1. What does the world look like at the speed of light? Our new computational photography technique allows us to visualize light in ultra-slow motion, as it travels and interacts with objects in table-top scenes. We capture photons with an effective temporal resolution of less than 2 ps per frame. Top row, left: a false color, single streak image from our sensor. Middle: time lapse visualization of the bottle scene, as directly reconstructed from sensor data. Right: time-unwarped visualization, taking into account the fact that the speed of light can no longer be considered infinite (see the main text for details). Bottom row: original scene through which a laser pulse propagates, followed by different frames of the complete reconstructed video. For this and other results in the paper, we refer the reader to the videos included in the project pages: femtocamera.info and http://giga.cps.unizar.es/~ajarabo/pubs/femtoSIG2013/.

F2 Figure 2. (a) Photograph of our ultrafast imaging system setup. The DSLR camera takes a conventional photo for comparison. (b) In order to capture a single 1D space–time photo, a laser beam strikes a diffuser, which converts the beam into a spherical energy front that illuminates the scene; a beamsplitter and a synchronization detector enable synchronization between the laser and the streak sensor. (c) After interacting with the scene, photons enter a horizontal slit in the camera and strike a photocathode, which generates electrons. These are deflected at different angles as they pass through a microchannel plate, by means of rapidly changing the voltage between the electrodes. The CCD records the horizontal position of each pulse and maps its arrival time to the vertical axis, depending on how much the electrons have been deflected. (d) We focus the streak sensor on a single narrow scanline of the scene. (e) Sample image taken by the streak sensor. The horizontal axis (672 pixels) records the photons’ spatial locations in the acquired scanline, while the vertical axis (1 ns window in our implementation) codes their arrival time. Rotating the adjustable mirrors shown in (a) allows for scanning of the scene in the y-axis and generation of ultrafast 2D movies such as the one visualized in Figure 1 (b–d, credit: Greg Gbur).

F3 Figure 3. Performance validation of our system. Left: Measurement setup used to validate the data. We use a single streak image representing a line of the scene and consider the centers of the white patches because they are easily identified in the data. Right: Graph showing pixel position versus total path travel time captured by the streak sensor (red) and calculated from measurements of the checkerboard plane position with a Faro digitizer arm (blue). Inset: PSF, and its Fourier transform, of our system.

F4 Figure 4. Left: Reconstructed x–y–t data volume by stacking individual x–t images (captured with the scanning mirrors). Right: An x–y slice of the data cube represents one frame of the final movie.

F5 Figure 5. Three visualization methods for the alien scene. From left to right, more sophisticated methods provide more information and an easier interpretation of light transport in the scene.

F6 Figure 6. Understanding reversal of events in captured videos. Left: Pulsed light scatters from a source, strikes a surface (e.g., at P1 and P2), and is then recorded by a sensor. The time taken by light to travel distances z1 + d1 and z2 + d2 is responsible for the existence of two different time frames and the need for computational correction to visualize the captured data in the world time frame. Right: Light appears to be propagating from P2 to P1 in camera time (before unwarping), and from P1 to P2 in world time, once time-unwarped. Extended, planar surfaces will intersect constant-time paths to produce either elliptical or circular fronts.

F7 Figure 7. Time unwarping in 1D for a streak image (x–t slice). Left: Captured streak image; shifting the time profile down in the temporal dimension by Δt allows for the correction of path length delay to transform between time frames. Center: The graph shows, for each spatial location xi of the streak image, the amount Δti that point has to be shifted in the time dimension of the streak image. Right: Resulting time-unwarped streak image.

F8 Figure 8. More scenes captured with our setup (refer to Figure 1 for the bottle scene). For each scene, from left to right: photograph of the scene (taken with a DSLR camera), a series of representative frames of the reconstructed movie, and peak time visualization of the data. The full movies can be found in the project pages: femtocamera.info and http://giga.cps.unizar.es/~ajarabo/pubs/femtoSIG2013/. Note that the viewpoint varies slightly between the DSLR and the streak sensor.

F9 Figure 9. Our work has inspired follow-up work in the field of computer graphics with the development of simulation frameworks departing from the assumption of an infinite speed of light (top, time-resolved rendering using our peak time visualization of a volumetric caustic),7 including the simulation of relativistic effects due to ultrafast camera motion (bottom, simulation of frames recorded by an accelerating camera in a scene captured using our system).8

UF1 Figure. Watch the authors discuss their work in this exclusive Communications video. http://cacm.acm.org/videos/imaging-the-propagation-of-light-through-scenes-at-picosecond-resolution

    1. Charbon, E. Will avalanche photodiode arrays ever reach 1 megapixel? In International Image Sensor Workshop (Ogunquit, ME, 2007), 246–249.

    2. Colaço, A., Kirmani, A., Howland, G.A., Howell, J.C., Goyal, V.K. Compressive depth map acquisition using a single photon-counting detector: Parametric signal processing meets sparsity. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Providence, RI, June 2012), IEEE, 96–102.

    3. Goda, K., Tsia, K.K., Jalali, B. Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena. Nature 458 (2009), 1145–1149.

    4. Gupta, O., Willwacher, T., Velten, A., Veeraraghavan, A., Raskar, R. Reconstruction of hidden 3D shapes using diffuse reflections. Opt. Expr. 20 (2012), 19096–19108.

    5. Heide, F., Hullin, M.B., Gregson, J., Heidrich, W. Low-budget transient imaging using photonic mixer devices. ACM Trans. Graph. 32, 4 (2013), 45:1–45:10.

    6. Heide, F., Xiao, L., Heidrich, W., Hullin, M.B. Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors. In CVPR (June 2014).

    7. Jarabo, A., Marco, J., Muñoz, A., Buisan, R., Jarosz, W., Gutierrez, D. A framework for transient rendering. ACM Trans. Graph. 33, 6 (2014), 177:1–177:10.

    8. Jarabo, A., Masia, B., Velten, A., Barsi, C., Raskar, R., Gutierrez, D. Relativistic effects for time-resolved light transport. Comput. Graph. Forum (2015) to appear. DOI: 10.1111/cgf.12604.

    9. Kadambi, A., Whyte, R., Bhandari, A., Streeter, L., Barsi, C., Dorrington, A., Raskar, R. Coded time of flight cameras: Sparse deconvolution to address multipath interference and recover time profiles. ACM Trans. Graph. 32, 6 (2013), 167:1–167:10.

    10. Kirmani, A., Hutchison, T., Davis, J., Raskar, R. Looking around the corner using ultrafast transient imaging. Int. J. Comp. Vision 95, 1 (2011), 13–28.

    11. Laurenzis, M., Velten, A. Nonline-of-sight laser gated viewing of scattered photons. Opt. Eng. 53, 2 (2014), 023102–023102.

    12. Naik, N., Zhao, S., Velten, A., Raskar, R., Bala, K. Single view reflectance capture using multiplexed scattering and TOF imaging. ACM Trans. Graph. 30 (2011), 171:1–171:10.

    13. O'Toole, M., Heide, F., Xiao, L., Hullin, M.B., Heidrich, W., Kutulakos, K.N. Temporal frequency probing for 5D transient analysis of global light transport. ACM Trans. Graph. 33, 4 (2014), 87:1–87:11.

    14. Pandharkar, R., Velten, A., Bardagjy, A., Bawendi, M., Raskar, R. Estimating motion and size of moving non-line-of-sight objects in cluttered environments. In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Colorado Springs, CO, June 2011), IEEE, 265–272.

    15. Qu, J., Liu, L., Chen, D., Lin, Z., Xu, G., Guo, B., Niu, H. Temporally and spectrally resolved sampling imaging with a specially designed streak camera. Opt. Lett. 31 (2006), 368–370.

    16. Velten, A., Fritz, A., Bawendi, M.G., Raskar, R. Multibounce time-of-flight imaging for object reconstruction from indirect light. In Conference for Lasers and Electro-Optics (OSA, 2012).

    17. Velten, A., Willwacher, T., Gupta, O., Veeraraghavan, A., Bawendi, M.G., Raskar, R. Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nat. Commun. 3, 745 (2012), 745:1–745:8.

    18. Velten, A., Wu, D., Jarabo, A., Masia, B., Barsi, C., Joshi, C., Lawson, E., Bawendi, M.G., Gutierrez, D., Raskar, R. Femto-photography: Capturing and visualizing the propagation of light. ACM Trans. Graph. 32, 4 (2013), 44:1–44:8.

    19. Velten, A., Wu, D., Jarabo, A., Masia, B., Barsi, C., Lawson, E., Joshi, C., Gutierrez, D., Bawendi, M.G., Raskar, R. Relativistic ultrafast rendering using time-of-flight imaging. In ACM SIGGRAPH Talks (2012).

    20. Wu, D., Velten, A., O'Toole, M., Masia, B., Agrawal, A., Dai, Q., Raskar, R. Decomposing global light transport using time of flight imaging. Int. J. Comput. Vision 107, 2 (April 2014), 123–138.

    21. Wu, D., Wetzstein, G., Barsi, C., Willwacher, T., O'Toole, M., Naik, N., Dai, Q., Kutulakos, K., Raskar, R. Frequency Analysis of Transient Light Transport with Applications in Bare Sensor Imaging. Springer, Berlin, Heidelberg, 2012, 542–555.
