
The Virtual Oceanarium

This high-performance computer simulation of Europe's largest aquarium lets its human visitors interact with 25 species and about 1,000 individual creatures and plants.
  1. Introduction
  2. Four Main Software Components
  3. Artificial (Sea) Life
  4. Speeding Simulation
  5. Interaction By Way of the Presenter
  6. Conclusion and Outlook
  7. References
  8. Author
  9. Footnotes
  10. Figures

The Virtual Oceanarium simulates the Lisbon Oceanarium, Europe’s largest aquarium. Built as part of the World Fair Expo’98 in Portugal, the Oceanarium continues to be one of Lisbon’s main tourist attractions. For the Virtual Oceanarium, which was also part of the Fair and remains at the Fair’s site, a graphics supercomputer simulates the Oceanarium’s exterior and surroundings, as well as its interior, including marine life found both above and below the surface of its marine habitats.

Like its glass-and-water counterpart, the simulation includes a huge central tank filled with creatures from around the world in 3D stereo projection (see Figure 1). Smaller tanks surrounding the central tank represent four distinct ocean habitats: the Antarctic, a coral reef in the Indian Ocean, the rocky coast of the North Pacific, and the coastline of the Azores archipelago in the mid-North Atlantic (see Figure 2).

Visitors to the real Oceanarium view marine life through thick acrylic windows. Visitors to the Virtual Oceanarium have a far more intimate view of its submarine environs where they find themselves surrounded by hundreds of fishes and plants. They also see seabirds, reptiles, and invertebrates. To get there, they fly by virtual helicopter over the former Expo area (now called Expo Urbe), then enter the building and dive in. The artificial marine species they find there not only look realistic, they’re equipped with simulated behavior and perception. A professional human presenter helps the audience interact with the simulated creatures and learn about marine life.

Here, I describe how my colleagues and I at the Fraunhofer Institute for Computer Graphics developed the Virtual Oceanarium’s software architecture, as well as its artificial life simulation system and interaction paradigms.

The Virtual Oceanarium accommodates several hundred visitors at a time. Their experience is not just a matter of looking at a screen: a stereo projection system with a 12-square-meter screen serves as a window on the computer-generated environments, offering interactively controlled aerial views, hallways, and animated virtual creatures throughout an auditorium. Visitors see a continuous flow of images rendered for the left and right eyes in the correct perspective; viewed through comfortable polarization glasses, the images create the illusion of spatial depth. Navigating through the simulation, the presenter encourages the audience to express their interests, such as where to go next, what to view in detail, and which creature to examine closely, as well as to ask questions. Depending on their responses, the presentation can follow any number of paths, thus elevating the experience beyond the typical scripted movie plot or computer game.

Inside the simulated fish tanks, visitors encounter a variety of species native to the open seas, including manta ray, tuna, barracuda, mackerel, and shark, the latter followed around by remoras cleaning their skin of parasites and feeding on the remnants of their recently devoured prey. Meanwhile, schools of silver mackerel dart across the auditorium and vanish into the deep blue water (realistic underwater lighting is achieved through real-time caustic effects), while a porcupine fish might inflate its body when feeling threatened by a nearby diver. Invertebrates, including corals and mussels, live on rocks, and a kelp forest functions as an animated background. However, not all this virtual marine life behaves as it would in the wild; for example, visitors can ride on the virtual sharks—and just about anything else they’d care to try—a thrilling experience.

The Indian Ocean Habitat is the Virtual Oceanarium’s most colorful place (see Figure 3), where communities of vivid tropical fish populate a coral reef. Visitors emerging from the reef see a tropical rain forest along the coastline of the Indian Ocean. Colder and rockier in appearance, the other three habitats include their own principal native species: penguins (Antarctic), sea otters (North Pacific), and puffins (North Atlantic).

Complementing this visual feast, quadraphonic sound adds another environmental dimension. When underwater, for example, visitors hear the sound of a diver’s breathing, and surface scenes are accompanied by the sounds of breaking waves, sea breezes, and birds.


Four Main Software Components

The Virtual Oceanarium consists of four main software components, all developed by researchers at Fraunhofer IGD:

  • The real-time rendering system, known as Y;
  • The interaction device abstraction layer, known as IDEAL, for virtual reality interaction devices;
  • A simulation system (specially developed for the project) for autonomous objects and virtual creatures; and
  • A main application (also specially developed for the project) for synchronization and supervision of the first three components.

Based on OpenGL, the Y renderer uses a high-level application interface to render complex scenes photo-realistically in real time. The Y renderer, which has been part of our virtual reality system “Virtual Design” since 1994 [5], runs on desktop environments (a single Unix-based workstation, such as those from DEC, SGI, and Sun Microsystems) with high-resolution stereo projection and in a five-sided CAVE (Cave Automatic Virtual Environment), invented at the University of Illinois at Chicago’s Electronic Visualization Laboratory.

IDEAL functions as the software interface to the various virtual reality interaction devices used in Virtual Design, including data gloves and tracking systems [3]. It implements a variety of classes of logical interaction devices, including “Location” (position in 3D space), “Orientation” (3D direction), “Space” (combination of location and orientation), “Valuator” (1D scalar value), and “Button” (0D event, any device button pressed or released). IDEAL arbitrarily and transparently maps these logical devices to physical devices connected to any network computer. Sophisticated communication protocols reduce latency, a function crucial in all virtual reality applications. More recently, Fraunhofer IGD researchers extended IDEAL to advanced interaction paradigms, including speech recognition and video-based tracking systems.

The simulation system is the Virtual Oceanarium’s core component, responsible for synchronization and load balancing, as well as for administering a set of so-called autonomous objects, or an abstract class of objects with generic behavior. A class creature, derived from the autonomous objects, implements the perception, behavior, locomotion, and appearance of the artificial creatures.

The class fish extends the creatures’ abilities by providing algorithms for collision avoidance, schooling, and other fish-specific behaviors. A class species collects attributes shared by members of the same species, including appearance, average size, perception, and relationships, such as predatory and symbiotic, with other species.
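The class hierarchy just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the project's actual code; all class, attribute, and behavior names here are assumptions:

```python
class AutonomousObject:
    """Abstract class of objects with generic behavior."""
    def update(self, world, dt):
        raise NotImplementedError

class Species:
    """Attributes shared by all members of one species."""
    def __init__(self, name, avg_size, sight_range, prey=()):
        self.name = name
        self.avg_size = avg_size
        self.sight_range = sight_range
        self.prey = set(prey)  # names of species this one preys on

class Creature(AutonomousObject):
    """Adds perception, behavior, locomotion, and appearance."""
    def __init__(self, species, position):
        self.species = species
        self.position = position

    def update(self, world, dt):
        stimuli = self.perceive(world)            # query the world database
        behavior = self.choose_behavior(stimuli)  # evaluate inner state
        self.move(behavior, dt)                   # kinematics and appearance

    def perceive(self, world):
        # Other creatures within this species' sight range.
        return [c for c in world
                if c is not self
                and self.distance_to(c) <= self.species.sight_range]

    def distance_to(self, other):
        return sum((a - b) ** 2
                   for a, b in zip(self.position, other.position)) ** 0.5

    def choose_behavior(self, stimuli):
        return "wander"

    def move(self, behavior, dt):
        pass

class Fish(Creature):
    """Fish-specific behaviors: fleeing predators, schooling, ..."""
    def choose_behavior(self, stimuli):
        # A perceived predator overrides everything else.
        if any(self.species.name in c.species.prey for c in stimuli):
            return "flee"
        if any(c.species is self.species for c in stimuli):
            return "school"
        return "wander"
```

A `Species` instance is shared by many `Fish` instances, so per-species attributes such as predator relationships are stored only once.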

The main application reads configuration files for all marine and terrestrial species, implements user interaction through IDEAL, and synchronizes all modules. After initialization, it spawns processes for the rendering and simulation systems. An additional component delivers the 3D sound.


Artificial (Sea) Life

Figure 4 shows the model my colleagues and I devised to simulate each of the Virtual Oceanarium’s artificial creatures and its relationship with the environment. Most real-world animals perceive their environments mainly through optical and acoustic stimuli, along with smell and taste. Some fish species also have special senses for detecting and analyzing vibrations and electrical fields. For our simulations, we modeled perception as a creature’s general ability to query the world database. By setting such parameters as data range and search angle, as well as the kind of data requested (such as position, velocity, and size), a simulated creature’s perception can be tailored by the modeling process to conform to the abilities of its real-world counterpart.
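Perception as a range- and angle-limited query of the world database might look like the following Python sketch; function and parameter names are illustrative, as the article does not show the system's actual interface:

```python
import math

def perceive(position, heading, world, sight_range, half_angle_deg):
    """Return the object positions in `world` that fall inside the
    creature's sensing cone: within `sight_range` of `position` and
    within `half_angle_deg` of the unit-length `heading` vector."""
    seen = []
    cos_limit = math.cos(math.radians(half_angle_deg))
    for obj_pos in world:
        offset = [o - p for o, p in zip(obj_pos, position)]
        dist = math.sqrt(sum(d * d for d in offset))
        if dist == 0.0 or dist > sight_range:
            continue  # out of range (or the creature itself)
        # Cosine of the angle between heading and direction to the object.
        cos_a = sum(h * d for h, d in zip(heading, offset)) / dist
        if cos_a >= cos_limit:
            seen.append(obj_pos)
    return seen
```

Tightening `sight_range` and `half_angle_deg`, or restricting the kind of data returned, tailors a creature's perception to its real-world counterpart, as described above.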

A creature’s perception is sometimes concentrated on a few aspects of its environment, depending on its inner state. For example, a hungry fish tends to focus on finding food and is less interested in schooling or mating. When a predator approaches, a fish’s priority shifts to avoiding attack and finding a place to hide. Evaluating the environmental stimuli and the creature’s inner state, the behavior module embedded by the modeling process determines the appropriate behavior for the creature.
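The priority shifts described above suggest a simple arbitration scheme. The following is a minimal sketch; the state keys and behavior names are invented for illustration and do not come from the article:

```python
def choose_behavior(state, stimuli):
    """Pick a behavior from inner state and perceived stimuli.
    Avoiding a predator overrides feeding, which overrides schooling."""
    if stimuli.get("predator_near"):
        # A threatened fish prefers hiding if shelter is in sight.
        return "hide" if stimuli.get("shelter_near") else "flee"
    if state.get("hunger", 0.0) > 0.5 and stimuli.get("food_near"):
        return "feed"  # a hungry fish focuses on finding food
    if stimuli.get("school_near"):
        return "school"
    return "wander"
```

A fixed priority ordering like this is cheap to evaluate every simulation step, which matters when a thousand creatures must be updated in real time.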

The general set of behaviors consists of avoiding collisions, fleeing, feeding, hiding, and schooling. A behavior might also influence a creature’s inner state; for example, depending on the amount of food it consumes, a predator reduces its hunger when it catches prey. Depending on the behavior it chooses, a creature also has to perform certain locomotive actions. Based on an underlying kinematics model, a fish might speed up or slow down or turn in a certain direction. To move itself, any animal, artificial and real-world alike, has to perform body actions; fish and other sea animals swim by moving their fins, sometimes even their whole bodies. We simulated this aspect of locomotion through the appearance module. Evaluating a creature’s actual acceleration and velocity, this module performs body movements and updates the world database. Seamless and realistic movements are achieved through a combination of geometry interpolations and special “level-of-detail” techniques that reduce the complexity of the geometric appearance of graphical objects. If an object, say an artificial fish or plant, is close to the observer, it is rendered at a high level of detail; if the object is far away and small details would go unnoticed by the observer, a less complex representation is used.
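Level-of-detail selection by viewing distance can be sketched in a few lines; the model representations and distance thresholds below are placeholders, not the project's actual values:

```python
def select_lod(models, distance, thresholds):
    """Pick a geometric representation by viewing distance.
    `models` is ordered from most to least detailed;
    `thresholds[i]` is the maximum viewing distance for `models[i]`."""
    for model, limit in zip(models, thresholds):
        if distance <= limit:
            return model
    return models[-1]  # beyond all thresholds: coarsest representation
```

Because the test runs once per object per frame, its cost is negligible next to the rendering work it saves.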

We had to leave out some major aspects of real-world creatures. For example, these creatures lack the ability to learn. My colleagues and I summarized the variety of senses, including visual, acoustic, and taste, into a general range of perception. In light of this simplified view of cognition and behavior, we also didn’t include hunting, because this behavior is not observed in aquariums where all animals are fed by their human handlers.

After we defined the model for simulating each of the Virtual Oceanarium’s denizens, we evaluated the biology and behavior-science literature, seeking to answer such questions as: Does a certain species swim in schools, and if it does, how large are they? How does it react in the presence of divers? What does it do when it feels threatened? And what other interesting behavior could be integrated into the simulation? The result was that we simulated and animated about 25 species.


Speeding Simulation

A typical viewer rates the quality of a virtual reality presentation according to two major criteria: content density and frame rate. Image resolution and realism are less important. Because a growing population of younger people is familiar with computers, such viewers compare any presentation with their favorite computer games. Performance standards today include a rendering frame rate of 30Hz and complex multilevel worlds. A presentation system unable to maintain a constantly high frame rate and an attractive display, even if it’s backed by million-dollar technology, loses the interest of its intended audience.

Unlike other approaches to developing artificial life and biology simulation, our model is relatively simple. We didn’t aim to map knowledge from biology and behavior science to a computer simulation; instead, we wanted to generate a realistic-looking visual simulation of a large number of artificial creatures. Therefore, one of the first things we did was to analyze the work of Demetri Terzopoulos (of the University of Toronto) [4], who applies spring-mass models to simulate the body movements of swimming fish while accounting for hydrodynamics. Terzopoulos’s fish “learn” to swim in a preprocessing phase of modeling development. His runtime results produce compelling, realistic, and aesthetic fish movements. The drawback is the cost of computing power during runtime; the equations of the spring-mass model have to be solved for each fish and simulation step. Even when running on high-end graphics workstations, only a small number of fishes can be simulated in real time this way.

To generate an underwater world with up to 1,000 individual creatures, we used a different approach. We modeled fish bodies with off-the-shelf computer animation systems, including those from Softimage and Alias|Wavefront, as textured polygonal geometry. We animated these bodies through the inverse kinematics tools in the programs. We then stored the animations in a number of phases, called “keyframes”; whenever the Virtual Oceanarium’s software is launched, these keyframes are loaded into the computer’s main memory (see Figure 5).

The left side of the figure shows three frozen states, or the keyframes, of a moving body. Each consists of a set of faces defining its surface. Seamless motion is achieved through continuous linear interpolation between the corresponding vertices of each keyframe. Because the number of surfaces and vertices to be rendered is a major limiting factor in real-time computer graphics, a visual simulation has to limit the complexity of the scene being displayed to a typical rendering frame rate of 20–30Hz. However, because realistic rendering requires precise geometric representation, or a large number of surfaces and vertices, we used level-of-detail techniques to optimize the trade-off between speed and image quality.
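The per-vertex linear interpolation between keyframes can be sketched as follows; the data layout (keyframes as lists of vertex tuples) is an assumption for illustration:

```python
def interpolate_keyframes(kf_a, kf_b, t):
    """Blend two keyframes of the same body by linear interpolation
    between corresponding vertices; t in [0, 1] runs from kf_a to kf_b.
    Each keyframe is a list of (x, y, z) vertex tuples."""
    return [tuple(a + t * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(kf_a, kf_b)]
```

Sweeping `t` from 0 to 1 over successive frames, then advancing to the next keyframe pair, yields the seamless swimming motion described above at a fraction of the cost of a physical spring-mass simulation.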

The figure’s top row shows body representations in several levels of detail in order of decreasing precision from left to right. The larger image, lower right, shows a realistic manta ray with texture mapping and lighting applied.

We also exploited the natural limits of underwater visibility, which depend on the opacity of water and local lighting conditions. For example, the range of a diver’s sight varies from only one to several tens of meters, depending on water quality and depth. Beyond this range, there is no need to simulate individual creatures. We employed a concept called “Aura” [1] to generate a volume of water with the diver in a central position, experimenting with cubic, ellipsoid, and spherical aura volumes. The number of creatures of a certain species inside an aura is computed by the Aura algorithm as the quotient of the volumes of the aura and the simulated environment (the fish tank in this case) multiplied by the overall population of the species. If the actual number of creatures inside an aura is too small, new creatures are generated. If the number is too high, creatures are deleted. The generation and destruction of creatures happens just outside the diver’s sight range, which also determines the general size of an aura volume. The result is that the population of artificial creatures is generally not a limiting factor in the simulation. Instead, the complexity of the displayed graphical geometry depends on the number of fishes visible simultaneously.
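The Aura population rule reduces to a few lines of arithmetic. A minimal sketch, with function names invented for illustration:

```python
def target_aura_count(aura_volume, tank_volume, species_population):
    """Expected individuals inside the aura: the quotient of the aura
    and tank volumes, multiplied by the species' overall population."""
    return round(aura_volume / tank_volume * species_population)

def adjust_population(current, target):
    """How many creatures to spawn (positive) or delete (negative),
    always just outside the diver's sight range."""
    return target - current
```

Because creatures are created and destroyed only where the diver cannot see, the visible population always looks continuous even though most of the tank is never simulated in detail.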

Simulating a set of creatures with perception in a common virtual environment is a computational problem of complexity O(n²), where n is the problem size, here the number of individual creatures. Each creature is potentially aware of every other creature in the entire Oceanarium. Even if a particular sight range limits the number of creatures that can be seen, the distance between each pair of creatures still has to be computed for each simulation step. Moreover, static scene geometry has to be taken into account to avoid collisions with obstacles; for static geometry, complexity increases with both the number of surfaces and the number of creatures.

We avoided this bottleneck by using a voxel grid. In a preprocessing stage, the whole scene space (a fish tank) is decomposed into a 3D array of cubic volumes, or voxels. Each voxel is tested in this preprocessing phase to determine whether it is occupied by scene geometry (an obstacle) or not occupied (the water), and the result is stored as an obstacle bit. Each voxel also maintains a list of the creatures currently inside it. While the obstacle bit is constant over the whole simulation, the list of creatures varies as the creatures move through the water.

Testing a creature’s target position for an obstacle can be done efficiently by a simple array access and bit test: identify the memory address of the voxel and check whether it is marked “obstacle” or “water.” To identify whether other creatures in the neighborhood are being perceived, only a small number of voxels have to be examined by each creature’s perception simulation, a task whose complexity increases with the size of the volume being considered. Using this method, we reduced the computational complexity to O(1) for the obstacle test and O(n) for perception, where n is the average population density of the environment.
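The voxel grid can be sketched as follows in Python (the production system would use a flat bit array in a compiled language; class and method names here are illustrative):

```python
class VoxelGrid:
    """Uniform grid over the tank: a static per-voxel obstacle bit
    plus a per-voxel list of creatures, updated as they move."""
    def __init__(self, dims, voxel_size):
        nx, ny, nz = dims
        self.dims = dims
        self.size = voxel_size
        self.obstacle = [False] * (nx * ny * nz)
        self.creatures = [[] for _ in range(nx * ny * nz)]

    def _index(self, pos):
        # Map a world position to a flat array offset.
        i, j, k = (int(c // self.size) for c in pos)
        nx, ny, nz = self.dims
        return (k * ny + j) * nx + i

    def mark_obstacle(self, pos):
        # Preprocessing: record that scene geometry occupies this voxel.
        self.obstacle[self._index(pos)] = True

    def is_obstacle(self, pos):
        # O(1): a single array access and bit test.
        return self.obstacle[self._index(pos)]

    def neighbors(self, pos):
        """Creatures in the voxel containing pos (extendable to the
        26 adjacent voxels for a wider perception radius)."""
        return self.creatures[self._index(pos)]
```

Each creature re-registers itself in `creatures` when it crosses a voxel boundary, so a perception query touches only the handful of voxels within sight range rather than the whole population.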


Interaction By Way of the Presenter

Interaction between visitors and the Virtual Oceanarium environment is performed directly by the presenter. We determined indirect interaction would be the best approach after studying a number of the project’s early public presentations in which individual visitors were allowed to navigate and interact directly. Although some visitors enjoyed being divers, most felt a little overwhelmed by having to navigate and maintain their orientation in a complex 3D environment. Even when we introduced collision detection and reduced the degree of navigational freedom, they still ran into obstacles and got lost. Moreover, because only a few people in a large audience might be allowed to try the interaction options, many others would feel frustrated.

Led by the presenter (functioning as the audience’s token diver), the interaction system follows two main interaction paradigms: direct and metaphoric. Direct interaction occurs when the diver freely roams the environment and the creatures react to his or her presence. For example, feeling threatened when a diver gets too close, a porcupine fish inflates its body, erecting its spines, and schools of fish turn away or divide to avoid colliding with the diver. Metaphoric interaction is made possible through 3D menus, each consisting of a ring of icons. For example, a diver selecting an icon triggers a certain reaction by the program or reveals a submenu offering more options. The diver can jump to a particular habitat by selecting its key species from the menu or, if inclined, hop on a shark, a manta ray, or a tuna.


Conclusion and Outlook

The Virtual Oceanarium offers a fairly large general-interest audience a fascinating and entertaining journey through a simulation of Europe’s largest aquarium in which the typical behavior of a variety of marine species can be viewed and reproduced as often as needed. Its core component is a real-time simulation system for artificial creatures designed to meet the audience’s expectations, maintaining a constantly high frame rate and presenting a detailed virtual environment. The software’s modularity makes it possible to readily integrate new species and new behaviors; moreover, the models of the aquarium and its fish tanks can be modified or exchanged, and models of real-world coastlines can be added as needed.

My colleagues and I envision several possible future scenarios for the Virtual Oceanarium. Presented in other aquariums or at fairs and conventions, it can continue to represent the Lisbon Oceanarium. As an extension of its multimedia facilities, it can provide visitors a new perspective on the marine life in its tanks, as well as in the real oceans. Meanwhile, we would like to see a more permanent installation similar to a planetarium in which shows are developed and presented to school children. The system might also be used as a testbed for cybernetic models of behavior science and ecology. Instead of charts and tables, research results could be presented to the general public through a thoroughly entertaining interactive virtual environment.


Figures

F1 Figure 1. Simulated sharks and a manta ray in the main tank.

F2 Figure 2. Virtual Oceanarium content structure.

F3 Figure 3. Coral reef in the Indian Ocean habitat.

F4 Figure 4. Object model of a virtual creature.

F5 Figure 5. A manta ray’s keyframes and levels of detail.

References

    1. Benford, S. and Fahlén, L. Awareness, focus and aura—A spatial model of interaction in virtual worlds. In Proceedings of HCI International'93 (Orlando, Fla., Aug. 1993).

    2. Fröhlich, T. Das Virtuelle Ozeanarium. Thema Forschung 2 (1997), 58–64.

    3. Fröhlich, T. and Roth, M. Integration of multidimensional interaction devices in real-time computer graphics applications. In Proceedings of Eurographics 2000, Blackwell Publishers, U.K., 2000.

    4. Terzopoulos, D., Tu, X., and Grzeszczuk, R. Artificial fish: Autonomous locomotion, perception, behavior and learning in a simulated physical world. Artific. Life 1 (1994).

    5. Reiners, D. High-Quality Real-Time Rendering for Virtual Environments. Dipl. thesis, Technical University, Darmstadt, Germany, 1994.

    This project was supported by the Portuguese Ministry of Science and Technology, Madeira Technopolo and Portugal-Frankfurt'97 S.A. It has been performed under the auspices of the Center for Computer Graphics in Coimbra, Portugal, which also provided most of the exterior and interior models.
