Twenty years ago, supercomputer performance was characterized by millions of floating-point operations per second (megaflops). Today, there are more than 50 supercomputers on the Top 500 list (www.top500.org) delivering peak performance over a trillion floating-point operations per second (teraflops). This increased computational capacity has enabled scientists to generate more complex simulations operating on higher-resolution grids with many more time steps.
Scientific visualization [4] is the process of converting scientific data into visual form to increase the scientist’s understanding of the data. In some cases, images are rendered on a supercomputer to take advantage of its computational resources to process the enormous data files that can’t fit in the relatively limited memory of personal computers and workstations.
Unfortunately, the resolution of output displays has not kept pace. The early days of scientific visualization (late 1980s) were limited to output displays of approximately one million pixels (a megapixel). As simulations grew, more and more data was crammed into that single megapixel. More recently, display technologies have advanced on several fronts; for example, high-definition television (HDTV) was standardized in 1996 by the Advanced Television Systems Committee in the U.S. at 1920 × 1080 pixels, substantially improving on the National Television System Committee (NTSC) standard video resolution of 640 × 480 pixels. Vendors have produced LCD panels at resolutions up to 3840 × 2400 (9 megapixels) [2]. Tiled displays [1] use multiple megapixel projectors to create a much larger display with more pixels.
The Hayden Planetarium at the American Museum of Natural History in New York City houses an example of an advanced multi-megapixel display. Completed in 1999, it is located inside the upper half of a 67-foot-diameter sphere large enough to accommodate more than 400 viewers. Images are projected onto the dome through seven projectors running at 1280 × 1024 pixels, a total of 9.1 million pixels. The result is 7.34 million effective pixels after edge-blending and geometry correction to create a seamless image on the dome. An SGI Onyx2 is used to run real-time graphics programs and play pregenerated images at 30 frames per second on the dome.
For a sequence in its inaugural space show, “Passport to the Universe,” Planetarium scientists collaborated with computer scientists at the San Diego Supercomputer Center (SDSC) to produce a visualization of the Orion Nebula.
Modeling the Nebula
The Orion Nebula is a vast cloud of dust and gas six light-years across, some 1,500 light-years from Earth, in our Milky Way Galaxy. It is the closest example of a stellar nursery, the kind of region in which the majority of the galaxy’s new stars are formed. More than 1,500 stars make up the Trapezium cluster at the heart of the nebula. The four brightest stars are approximately 100,000 times brighter than the sun, and their radiation causes a thin layer of gas in the nebula to glow. Young stars shrouded in dust and gas are called proplyds [8] and were first detected in Hubble Space Telescope imagery. Under the right conditions, astronomers believe, these proplyds take about 10 million years to form planetary systems.
Figure 1a shows a mosaic of the Orion Nebula from Hubble images. The reddish-brown tint is due to the intervening matter between the Earth and the nebula. The image was adjusted to remove the tint and give a more accurate representation of the color one would find inside the nebula [8].
The next step was to refine a 3D polygonal model of the ionization layer (a thin layer of glowing gas) [9]. Finally, the model and the color-corrected mosaic were used to create a continuous 3D volumetric representation using volume scene graphs [6]. The resulting volume scene graph could be sampled at any resolution to create a discrete 3D volume grid. Due to memory constraints on potential rendering platforms, the volume scene graph was sampled at 652 × 326 × 652 locations; with 8B per sample, the discrete 3D volume required 1GB of memory.
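Because a volume scene graph defines the scene as a continuous function of 3D position, producing the discrete grid amounts to evaluating that function at regularly spaced points. The following C sketch illustrates the idea; `scene_eval`, its placeholder Gaussian body, and the 8B RGBA sample layout are illustrative assumptions, not the actual SDSC code.

```c
#include <math.h>
#include <stdlib.h>

/* Hypothetical evaluator for the continuous volume scene graph: returns
   emitted color (RGB) plus opacity (A) at a 3D point. Stand-in name. */
typedef struct { unsigned short r, g, b, a; } Sample;  /* 8B per sample */

/* Placeholder body: a single Gaussian blob of glowing gas. The real scene
   graph combined the Hubble mosaic, the ionization-layer model, and the
   proplyds [6]. */
Sample scene_eval(double x, double y, double z)
{
    double d2 = x * x + y * y + z * z;
    unsigned short v = (unsigned short)(65535.0 * exp(-d2));
    Sample s = { v, v, v, v };
    return s;
}

/* Sample the continuous scene graph onto a discrete nx x ny x nz grid
   spanning the box [x0,x1] x [y0,y1] x [z0,z1]. At 652 x 326 x 652 and
   8B per sample, the grid comes to roughly 1GB, as in the article. */
Sample *sample_scene(int nx, int ny, int nz,
                     double x0, double x1, double y0, double y1,
                     double z0, double z1)
{
    Sample *grid = malloc((size_t)nx * ny * nz * sizeof *grid);
    if (!grid) return NULL;
    for (int k = 0; k < nz; k++)
        for (int j = 0; j < ny; j++)
            for (int i = 0; i < nx; i++) {
                double x = x0 + (x1 - x0) * i / (nx - 1);
                double y = y0 + (y1 - y0) * j / (ny - 1);
                double z = z0 + (z1 - z0) * k / (nz - 1);
                grid[((size_t)k * ny + j) * nx + i] = scene_eval(x, y, z);
            }
    return grid;
}
```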
Initially, only a few proplyds were to be included, but eventually 58 were created thanks to the power of the volume scene graphs. Figure 2 shows nine of the simulated proplyds; the upper-left corner represents HST-10, the most advanced proplyd visible in Hubble images. Each proplyd was modeled with a volume scene graph and sampled at various resolutions from 32³ to 128³ depending on its complexity. These additional objects required 0.5GB of memory, for a total of 1.5GB for all the volumetric data. Finally, 883 stars were added to the scene based on their 3D location [8]; for a more detailed description of the modeling process, see [6, 7].
Rendering the Nebula
Volume rendering is the process of converting a 3D volume into a 2D image [3]. Before deciding which renderer to use (or writing a new one), I analyzed the rendering complexity. For each individual image on the dome, 9.1 million pixels would have to be rendered. The visualization would run 2.5 minutes, which, at 30 frames per second, works out to more than 40 billion pixels, or more than 100GB of images (a back-of-envelope sketch of this arithmetic follows the list below). By comparison, the movie Toy Story contained approximately 500GB of images. Since we had less than one month to do the rendering, it was clear that a parallel renderer and significant computational resources would be needed. The renderer would have to accommodate the following requirements:
- Parallel or multithreaded execution on multiple platforms (for redundancy and capacity);
- Perspective viewing model;
- Multiple volumes of arbitrary extent and resolution;
- Opacity independent of transparency; and
- Stars rendered based on their apparent magnitude.
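As a concrete check on the arithmetic above, this small C program reproduces the pixel and storage budget; the 3 bytes per stored pixel is an assumption on my part, chosen to match the published “more than 100GB” figure.

```c
#include <stdio.h>

int main(void)
{
    double seconds = 2.5 * 60.0;          /* 2.5-minute sequence      */
    double frames  = seconds * 30.0;      /* 30 frames per second     */
    double pixels  = frames * 9.1e6;      /* 9.1 Mpixels per frame    */
    double bytes   = pixels * 3.0;        /* assumed 3 B per pixel    */
    printf("frames: %.0f\n", frames);             /* 4,500 frames     */
    printf("pixels: %.1f billion\n", pixels / 1e9); /* ~41 billion    */
    printf("data:   %.0f GB\n", bytes / 1e9);       /* ~120 GB        */
    return 0;
}
```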
The available (commercial and research) parallel renderers each had shortcomings. Most were eliminated by our need for a perspective viewing model, which substantially increases rendering complexity [5]. For efficiency, most volume renderers generate orthographic projections; these are suitable for producing arm’s-length views of objects but do not provide the visual cues needed for the kind of fly-thru that had to blend in with the rest of the show. Unfortunately, none of the available perspective volume renderers was capable of generating the sequence within our time constraints.
The proplyds were another complication. The central region of the nebula extends 14.3 light-years, while the proplyds extend an average of only 0.007 light-years. If the nebula and proplyds were sampled into one discrete 3D volume at 1,000³ sample points, a proplyd would be just a single sample point, and therefore indistinguishable. Rendering most proplyds required at least 32 samples per side to capture the necessary detail, which would require a 70,000³ volume containing the nebula and the proplyds. Even at 5B per sample, such a volume would occupy roughly 1.7PB. Rendering time generally varies linearly with the number of bytes in a volume, so the smaller the volume, the better. Using multiple volumes, we modeled the nebula and stellar objects with just 1.5GB of data, cutting the data (and thus the rendering time) by roughly a factor of a million compared to the single-volume approach.
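The fix was to let each object keep its own grid at its own resolution: during rendering, a sample point along a ray gathers contributions only from the volumes whose bounding boxes contain it. A minimal sketch of that lookup follows; the `Volume` layout, nearest-neighbor sampling, and additive combination are simplifying assumptions, not MVR’s actual interfaces.

```c
#include <stddef.h>

typedef struct {
    double lo[3], hi[3];   /* world-space bounding box of this volume  */
    int    n[3];           /* grid resolution, e.g. 32 to 128 per side */
    float *rgba;           /* n[0]*n[1]*n[2] samples, 4 floats each    */
} Volume;

/* Nearest-neighbor lookup; the real renderer would interpolate. */
static float lookup(const Volume *v, int c, double u, double w, double t)
{
    int i = (int)(u * (v->n[0] - 1) + 0.5);
    int j = (int)(w * (v->n[1] - 1) + 0.5);
    int k = (int)(t * (v->n[2] - 1) + 0.5);
    return v->rgba[(((size_t)k * v->n[1] + j) * v->n[0] + i) * 4 + c];
}

/* Accumulate contributions from every volume containing point p. Each
   volume keeps its native resolution, so a tiny proplyd grid and the
   652 x 326 x 652 nebula grid coexist without resampling either onto
   a single, enormous grid. */
void sample_point(const Volume *vols, int nvols, const double p[3],
                  float rgba_out[4])
{
    for (int c = 0; c < 4; c++) rgba_out[c] = 0.0f;
    for (int i = 0; i < nvols; i++) {
        const Volume *v = &vols[i];
        if (p[0] < v->lo[0] || p[0] > v->hi[0] ||
            p[1] < v->lo[1] || p[1] > v->hi[1] ||
            p[2] < v->lo[2] || p[2] > v->hi[2])
            continue;                /* outside this volume's box */
        double u = (p[0] - v->lo[0]) / (v->hi[0] - v->lo[0]);
        double w = (p[1] - v->lo[1]) / (v->hi[1] - v->lo[1]);
        double t = (p[2] - v->lo[2]) / (v->hi[2] - v->lo[2]);
        for (int c = 0; c < 4; c++)
            rgba_out[c] += lookup(v, c, u, w, t);
    }
}
```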
Transparency is normally defined as the complement of opacity; that is, a fully opaque object allows no light shining from behind it to pass through. A fluorescing gas, however, gives off light without blocking the light shining from behind it. The modeling and rendering tools therefore had to allow independent values for transparency and opacity to correctly model and render the “glowing effect” of the gas in the nebula.
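Conventional compositing ties the two together: a sample’s opacity both contributes color and attenuates everything behind it. Decoupling them means carrying separate “emit” and “block” values per sample, as in this minimal front-to-back sketch (an illustration under that assumption, not MVR’s code):

```c
typedef struct {
    float r, g, b;   /* emitted color              */
    float emit;      /* how strongly it adds light */
    float block;     /* how strongly it occludes   */
} GasSample;

/* Front-to-back compositing along one ray. A fluorescing gas sample has
   emit > 0 but block near 0: it adds light without dimming what lies
   behind it, which is impossible if block is forced to equal emit. */
void composite_ray(const GasSample *s, int nsamples, float rgb_out[3])
{
    float T = 1.0f;  /* transmittance accumulated so far */
    rgb_out[0] = rgb_out[1] = rgb_out[2] = 0.0f;
    for (int i = 0; i < nsamples && T > 0.001f; i++) {
        rgb_out[0] += T * s[i].emit * s[i].r;
        rgb_out[1] += T * s[i].emit * s[i].g;
        rgb_out[2] += T * s[i].emit * s[i].b;
        T *= 1.0f - s[i].block;  /* only 'block' attenuates */
    }
}
```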
We rendered stars using a Gaussian footprint to simulate the spreading of light from stars that takes place on a photographic negative. We based the size of the footprint on the star’s apparent magnitude derived from its absolute brightness and its distance from the viewer. While not accurate during a fly-thru, it does simulate what people are used to seeing in astronomical images.
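The standard distance-modulus relation converts a star’s absolute magnitude and distance into its apparent magnitude; mapping that to a Gaussian splat might look like the sketch below. The magnitude-to-flux constant (10^(-0.4m)) is standard astronomy, but the footprint-size mapping and scale factors here are illustrative guesses, not the values tuned for the show.

```c
#include <math.h>

/* Apparent magnitude from absolute magnitude M and distance in parsecs,
   via the standard distance modulus: m = M + 5*log10(d / 10pc). */
double apparent_magnitude(double M, double d_parsecs)
{
    return M + 5.0 * log10(d_parsecs / 10.0);
}

/* Splat a star as a Gaussian footprint. Brighter stars (smaller m) get
   wider, brighter footprints, mimicking the spread of light on a
   photographic negative. The sigma mapping is an illustrative choice. */
void splat_star(float *img, int w, int h, double px, double py, double m)
{
    double flux  = pow(10.0, -0.4 * m);           /* relative brightness */
    double sigma = 1.0 + 0.5 * log10(1.0 + flux); /* footprint radius    */
    int r = (int)(3.0 * sigma) + 1;
    for (int dy = -r; dy <= r; dy++)
        for (int dx = -r; dx <= r; dx++) {
            int x = (int)px + dx, y = (int)py + dy;
            if (x < 0 || x >= w || y < 0 || y >= h) continue;
            img[y * w + x] += (float)(flux *
                exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma)));
        }
}
```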
The System
I thus developed a multi-volume renderer (MVR), which was multithreaded by Greg Johnson of SDSC. I wrote it in C to be as portable as possible, as the final rendering platform had not been delivered before code development began. The project deadlines overlapped with the installation of several new platforms at SDSC: an IBM RS/6000 SP parallel supercomputer; a Sun Microsystems E10000 high-end server; and a Cray MTA-1 parallel supercomputer. None of them was scheduled to be in production before our own deadline, so maximum flexibility was key. The multithreading was first implemented on the MTA-1 using MTA-specific multithreading libraries, then ported to the SP and the E10000 using the OpenMP application program interface. The source code required about 20 compiler directives to account for the differences in the platforms and their multithreading libraries.
In MVR, each pixel can be rendered independently using 1.3 million (1280 × 1024) threads per image. This number is sufficient for proper load balancing, but as more threads run simultaneously, memory conflicts (and subsequent stalls) occur, as shown in the table. The MTA-1 maintains a “ready pool” of threads to execute. When a memory stall occurs, a thread is swapped out and a new thread starts executing. Given a sufficient number of threads, this scheme effectively hides memory latency and results in the virtually optimal efficiency outlined in the table.
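Given that per-pixel independence, the OpenMP side of the port can be sketched with a single directive over the pixel loops; `render_pixel` and its placeholder body below are my stand-ins, since the article does not show MVR’s source.

```c
#include <omp.h>
#include <stddef.h>

#define WIDTH  1280
#define HEIGHT 1024

/* Placeholder for MVR's per-pixel ray caster (the real code casts a
   perspective ray through the multi-volume scene). */
static void render_pixel(int x, int y, float rgb[3])
{
    rgb[0] = (float)x / WIDTH;
    rgb[1] = (float)y / HEIGHT;
    rgb[2] = 0.0f;
}

void render_frame(float *img /* WIDTH * HEIGHT * 3 floats */)
{
    /* Every pixel is an independent unit of work: collapse(2) exposes
       all 1.3 million iterations for load balancing. On the Cray MTA-1
       the same loops carried MTA-specific pragmas instead; per the
       article, about 20 such directives covered platform differences. */
#pragma omp parallel for collapse(2) schedule(dynamic)
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            render_pixel(x, y, &img[((size_t)y * WIDTH + x) * 3]);
}
```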
Once everything was modeled and the MVR completed, it looked like we would need three weeks of rendering—except for a stroke of luck. By November 1999, SDSC had taken partial delivery of an IBM RS/6000 SP system while its final 952 processors were assembled and tested at IBM in New York. The weekend before the machine was to be disassembled and readied for shipment to SDSC, Greg Johnson and Mark Duffield (IBM) traveled to the IBM facility to generate the final sequence. Over a 12-hour period, they managed to render 28,504 1280 × 1024 images using the 952 processors. One multithreaded renderer ran on each eight-processor node, generating an image every two to three minutes. Once the rendering was done, it took almost 24 hours to transfer the images from the local disks to tape for transport to the Planetarium. Miraculously, only three images either weren’t rendered or didn’t transfer onto tape. Less than 24 hours later, the images were assembled into movies and displayed on the dome at the Planetarium, two weeks ahead of schedule.
While the visualization is inspiring, another version was still needed for other venues. After many iterations, we finalized a camera path for a “flat wall” display, and 4,500 images were rendered on the E10000 at SDSC and shown, for example, in the Electronic Theater at SIGGRAPH 2000 in New Orleans. Anticipating access to larger displays in the future, we also rendered the same camera path at 6400 × 3072 pixels per frame (88 billion pixels in all), using spare cycles on the E10000, approximately 1,500 CPU days over a six-month period.
Figure 1b is a visualization of the Orion Nebula from the viewpoint of the Earth; the mosaic of Hubble images in Figure 1a is shown for purposes of comparison. Figure 3 is the Orion Nebula from the viewpoint of a spaceship inside the nebula. Along the back side of the “valley,” a steep “cliff” faces the four central stars glowing with a bright yellow-orange light. The cliff face is called the “bright bar” and is visible in the lower left-hand corner of Figures 1a and 1b. Each proplyd is roughly a spherical ball of dust and gas with a central star, drawn into a teardrop shape because the side facing the four central stars receives full illumination while the far side receives far less. (For more images, stereo pairs, and information, see vis.sdsc.edu.)
The Future
Tens or hundreds of millions of pixels enable all kinds of visualization in diverse scientific fields, including chemistry and biology as well as astronomy and cosmology. But these additional pixels represent an added expense and require far more computing power to generate their content. Fortunately, solutions at all price points are becoming available, from inexpensive PC-based clusters [1] to the high-end supercomputing described here.
This Hayden Planetarium collaboration involved astronomers, computer scientists, and artists, but future collaborations are unbounded. A key result is that, as of June 2002, more than two million people (mostly students from kindergarten to high school) have seen “Passport to the Universe,” immersing themselves in its scientific and technological advances. Perhaps some of them, experiencing the vastness, beauty, and wonder of space, will be inspired to become tomorrow’s astronomers and computer scientists.