
Ocean and Climate Modeling

Only the most advanced parallel computers are fast enough to produce high-quality ocean simulations and accurate global climate predictions of temperature and precipitation.

Climate prediction has been regarded by researchers as a huge computational problem since the first supercomputers emerged 25 years ago. It's also a problem of great practical importance, because it is increasingly clear that the Earth's climate is warming in response to greenhouse gases produced by fossil fuel consumption and deforestation. The four warmest years in the past 600 (a record extended back beyond instrumental measurements through tree-ring and ice-core data) occurred during the 1990s. Mechanisms other than the greenhouse effect, such as volcanic activity and changes in the sun's brightness, cannot explain the temperature increases of the past 100 years [3].

The challenge for scientists and software engineers is how to predict the evolution of temperature and precipitation with sufficient regional detail to be useful to people around the world. In addition to climate change, these scientists and engineers would like to be able to model and predict, out to as-yet-unknown limits, the significant natural variations of climate over interannual, decadal, and century time scales.

Climate problems are ocean problems in many respects. The ocean is commonly regarded as the flywheel of the climate system, since it retards change by way of its high heat capacity and varies with longer periods than the atmosphere alone. Moreover, the ocean and atmosphere interact with each other dramatically to produce coupled oscillations, such as that of El Niño and the Southern Oscillation [5], as well as longer-period phenomena in the North Atlantic, North Pacific, and the Southern Ocean near Antarctica. Thus, it is paramount that any effort to predict climate change and climate variations involves a more accurate treatment of the global ocean. This means that strong narrow currents, such as the Gulf Stream, which runs north along the east coast of the U.S., then across the North Atlantic toward Europe, must be represented with proper width, extent, and strength in computer models and that internal disturbances, such as El Niño, must be able to propagate freely in the simulations. The fact that the actual Gulf Stream changes dramatically in its cross-stream direction with only a few tenths of a degree change in latitude or longitude suggests that ocean models need grid meshes as finely spaced as 1/10 of a degree (roughly 7 miles, or 11 km) to depict the major heat transport of strong ocean currents.
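
A rough conversion shows where figures like these come from: one degree of latitude spans about 111 km on a spherical Earth (east-west spacing shrinks further with the cosine of latitude). The quick sketch below uses that approximation; it is only an illustration of the arithmetic, not a calculation from any particular model.

```python
# Approximate meridional grid spacing for a given resolution in degrees.
# Assumes a spherical Earth of radius 6,371 km; all figures are rough.
import math

EARTH_RADIUS_KM = 6371.0
KM_PER_DEGREE_LAT = math.pi * EARTH_RADIUS_KM / 180.0   # about 111 km
KM_PER_MILE = 1.609344

for spacing_deg in (1.0, 0.5, 0.1):
    km = spacing_deg * KM_PER_DEGREE_LAT
    print(f"{spacing_deg:4.1f} degree -> {km:5.1f} km ({km / KM_PER_MILE:4.1f} miles)")
```

At 1/10 degree, this gives about 11 km, or roughly 7 miles, consistent with the spacing cited above.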

In addition, damping associated with the use of numerical methods is too great to allow propagation of El Niño and other climatic disturbances unless the grid spacing is smaller than about 1/2 degree in latitude and longitude. The need for fine oceanic grid spacing, along with global coverage and extended time integrations, makes climate prediction an especially challenging computational problem.


Progress in Ocean Modeling

Ocean modeling began in the 1960s and has always been computer limited [6]. In the early 1970s, the small widths of some major ocean currents became apparent to oceanographers. It was not until 1986, with the advent of the four-processor vector Cray X-MP supercomputer and its secondary memory of 256 megawords, that global ocean modeling on fine grids could begin. Even then, the number of wall-clock hours per ocean year was so great that only a few decades of simulation could even be attempted. However, future vector machines seemed to promise performance that would dramatically improve and produce sustained execution rates of nearly 100 billion floating point operations per second (flops), or 100Gflops. I described these machines in my talk at the Supercomputing ’88 conference, along with their ocean-grid configurations and clock times per simulated year [7]. The timings were derived by scaling the known properties of an existing model already optimized for parallel vector machines. The increased machine speeds would allow larger problems to be solved in shorter periods of time.

The 1988 projection of future machine performance was not particularly successful, however, since many of the machines never made it into commercial production. Nevertheless, other machines would eventually push parallel computing speeds to hundreds of times faster than those available in 1988 (see Table 1). The table includes three supercomputing architectural categories: parallel vector processor (PVP), massively parallel processor (MPP), and distributed shared memory (DSM).

The table's optimized speed estimates were derived by scaling the performance of a PVP code (the Parallel Ocean Climate Model [8]) in the top category and of a newer MPP and DSM code (the Parallel Ocean Program at Los Alamos National Laboratory) in the lower two categories (see www.oc.nps.navy.mil/~braccio/woce.html and www.acl.lanl.gov/climate/models/pop). For MPP/DSM machines, ocean-model performance is usually about 10% of a manufacturer's cited peak speed, due to limitations associated with memory bandwidth and interprocessor communication. In the PVP list in the table, the first four machines are from Cray and the last two are from NEC, a leading Japanese supercomputer manufacturer. In the MPP list, the first two are from Thinking Machines and Cray Research and the last is from Fujitsu, another leading Japanese supercomputer manufacturer.
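
Timing estimates like those in Table 1 follow from a simple work-divided-by-speed calculation. The sketch below applies the 10%-of-peak rule of thumb; the grid size, time steps per year, and floating point operations per grid point per step are illustrative assumptions, not figures taken from the models cited above.

```python
# Rough wall-clock cost of one simulated ocean year.
# All per-model constants here are illustrative assumptions, not measured values.

def hours_per_model_year(peak_gflops, nx, ny, nz,
                         steps_per_year=50_000,          # ~10-minute time steps
                         flops_per_point_per_step=300,   # assumed cost per grid point
                         sustained_fraction=0.10):       # ~10% of peak, per the rule of thumb
    sustained_flops = peak_gflops * 1e9 * sustained_fraction
    total_work = nx * ny * nz * steps_per_year * flops_per_point_per_step
    return total_work / sustained_flops / 3600.0

# Example: a 1/10-degree global grid (3600 x 1800 points, 40 levels)
# on a machine with a nominal 1,000-Gflop (1-Tflop) peak.
print(f"{hours_per_model_year(1000, 3600, 1800, 40):.0f} clock hours per simulated year")
```

Under these assumed numbers, even a machine sustaining 100Gflops needs roughly two weeks of dedicated time for a 30-year run on the finest grid, which is why large blocks of machine time are required.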

The DSM list includes only two configurations, both from SGI, although they are representative of the overall U.S. industry, which today includes IBM, Compaq, and Sun Microsystems. These manufacturers are developing clusters of microprocessor-based servers to compete with high-end Japanese PVP and MPP products. The NEC SX-5 in the top list of the table is also a clustered machine based on vector units, although it is classified as PVP, because it is able to exploit modified PVP code and uses custom processors, rather than commodity microprocessors.

Table 1 also includes the clock hours involved in using various ocean grids on these machines (and resembles a table from 1988). The 1988 projections turned out to be fairly accurate, in that global ocean simulations have now been conducted with 1/2-, 1/4-, and 1/6-degree models with 20 vertical levels [2, 8, 10]. A 1/8-degree simulation on an NEC SX-4 was planned but never conducted, since the U.S. Department of Commerce voided the purchase of NEC equipment by the National Center for Atmospheric Research (NCAR) in Boulder, Colo., in 1997. Eventually, in 1998, a 1/10-degree, 40-vertical-level calculation for the North Atlantic was run on a Thinking Machines CM-5 by scientists at Los Alamos [9] as a prelude to a fully global calculation there this year.

Typically, these ocean models are run for 10 or more simulated years, starting with climatological ocean data and applying daily surface fields from meteorological archives. These integrations are long enough to achieve upper-ocean equilibrium, assess grid resolution, and allow analysis of output for physical insights into ocean circulation, along with aspects of climate dynamics. Model results, including animations, are archived at central sites and distributed over the Internet upon request (see vislab-www.nps.navy.mil/~rtt and www.acl.lanl.gov/climate). These simulations have inspired detailed analyses, articles, and reports by many scientists.


Ocean Currents and Their Instabilities

Illustrating typical results, Figure 1 shows sea surface temperature from a model with an average grid spacing of 1/6 degree and 20 vertical levels [2]. The calculations were performed using a grid with progressively finer horizontal spacing at higher latitudes; the spacing at the equator was 0.28 degrees, or roughly 18 miles, or 30 km. The instantaneous temperature field is rich in detail, showing evidence of strong east-west currents, eddies, tropical waves, and boundary currents. Although this simulation depicts most aspects of the real ocean, one deficiency is that the thin currents on the western sides of ocean basins travel too far poleward before separating from the coasts. Therefore, the model distorts both the Gulf Stream in the North Atlantic and the Kuroshio current near Japan in the North Pacific. Since a significant fraction of the ocean’s poleward heat transport occurs in such currents, they need to be depicted accurately when performing climate studies.

The model's deficiency in reflecting physical-world current behavior is due to inadequate grid resolution; the results are much better in a recent Atlantic calculation with 1/10-degree spacing at the equator and 40 vertical levels [9]. Figure 2 shows characteristic instabilities of strong currents in the Atlantic model and in satellite observations, as depicted by the time variability of sea-surface height. In simulations at lower resolution, the Gulf Stream separates from the east coast of North America too far north and fails to turn northwestward into the Labrador Sea; moreover, the signature of the Azores Current extending east to the Strait of Gibraltar is missing. In the latest simulation (1998) by Los Alamos investigators, all these features, along with many more, are reproduced faithfully.

Earlier evidence from NASA’s Jet Propulsion Laboratory in Pasadena, Calif., suggested that an increase in vertical levels to 40 might be enough to improve Gulf Stream separation at 1/6 degree; but the 1998 simulation at Los Alamos indicated that reducing the horizontal grid size to 1/10 degree is necessary to reproduce the observed offshore currents, as well as high levels of variability. Thus, a 1/10-degree grid at the equator and 40 vertical levels provide significantly better representation of the full suite of ocean phenomena than was possible in somewhat coarser grids. However, using this resolution globally requires machines that can sustain 100Gflops, and large blocks of machine time are needed to simulate many decades of ocean time.


Multiyear Ocean Variations

Fine-grid models can reproduce known climatic signals in the ocean, as shown in Figure 3 by a 10-year wave moving around Antarctica. The wave is found in time-longitude plots of the anomalous extent of sea ice, showing up both in the satellite observations on the left and in the output of an ocean-ice model on the right. In this study, conducted at the Naval Postgraduate School in Monterey, Calif., in 1998, 1/4-degree ocean grid spacing adequately depicts the strong currents encircling Antarctica; the wavelike features move eastward at the same speed as they are observed to move in the physical world. In earlier studies of this Antarctic circumpolar wave, coarser-grid ocean models using the same atmospheric conditions were unable to produce ice patterns moving at the correct speed. The high resolution of 1/4-degree grids is necessary for treating circumpolar ocean currents that affect climatic anomalies at high latitudes, while grid spacing of 1/2 degree or less is needed to avoid excessive damping of tropical ocean phenomena related to El Niño.


Climate Modeling

Climate models are based on detailed equations for the 3D hydrodynamic and physical processes that occur in the atmosphere, ocean, and sea ice; they employ complicated but efficient numerical methods to obtain solutions evolving over time for these equations [11]. Due to the need for long simulations with fine grids, climate modeling requires the full power of supercomputers, obtainable only by exploiting many processors simultaneously. Until recently, the number of processors typically ranged from 4 to 32, and the programming needed to exploit them was relatively straightforward. The computer’s memory was usually fully addressable by all processors, and small percentages of nonparallel code did not cause major slowdowns. That situation changed rapidly in the 1990s, due largely to microprocessor technology—with its characteristic cache-based architectures, distributed shared memories, and commodity chips—becoming the main approach to high-performance computing available from U.S. supercomputing manufacturers [4]. It is therefore imperative for climate-simulation scientists and software engineers to design and adapt climate models to take advantage of these architectures.
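
The observation that small percentages of nonparallel code once caused little harm is Amdahl's law at work: a serial fraction that is negligible on a handful of processors limits speedup severely on hundreds or thousands. A minimal illustration (the serial fractions are made-up examples):

```python
# Amdahl's law: speedup on p processors when a fraction s of the work is serial.
def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for s in (0.01, 0.05):
    for p in (16, 256, 1024):
        print(f"serial fraction {s:.0%}, {p:5d} processors -> "
              f"speedup {amdahl_speedup(s, p):6.1f}x")
```

With a 5% serial fraction, 1,024 processors deliver less than a 20-fold speedup, which is why the move to MPP and DSM machines forces nearly every part of a climate model to be parallelized.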


Over the past five years, scientists at NCAR have constructed a climate model to run mainly on PVP machines by building on their long-term success with the Community Climate Model of the atmosphere, the latest version of which is called CCM-3 (see www.cgd.ucar.edu/cms/ccm3). The model is used most often with 18 vertical levels and a horizontal representation using atmospheric waves down to an equivalent grid-point spacing of about 3 degrees (roughly 180 miles, or 300 km). In 1995, NCAR researchers joined the model to an ocean model of approximately 2-degree grid spacing and to a model of sea ice with the same coarse grid, calling the coupled model the Climate System Model (see www.cgd.ucar.edu/csm); its representation of the current climate was documented in a special issue of the Journal of Climate, July 1998 (see kiwi.atmos.colostate.edu/JoC).


Parallel Climate Model

Due to the growing need to target MPP and DSM machines for ambitious climate simulations, NCAR began a complementary effort in 1996 with support from the U.S. Department of Energy and collaboration from investigators in DOE-supported laboratories and universities (see www.cgd.ucar.edu/pcm). Led by Warren Washington of NCAR, the project’s initial goal was to build a Parallel Climate Model for MPP and DSM architectures, exploiting some of the earlier parallel techniques developed during a 1994–97 DOE program called Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP), which sought to move climate models onto MPP supercomputers. One particular version of CCM-3 could exploit up to 64 processors simultaneously. CHAMMP also supported development of the Parallel Ocean Program at Los Alamos and a sea-ice model at the Naval Postgraduate School.

To join these models, scientists and software engineers from Washington's NCAR group adapted, in 1997, a flux-coupling driver program from similar code in the Climate System Model. They ensured that all interprocessor communication in each component would be done by the Message Passing Interface, a standardized software tool facilitating portability across various platforms. The ocean was configured at relatively high resolution (2/3-degree average grid spacing, though finer than that near the equator and Antarctica), as was the sea ice (about 1/4-degree spacing). The 9:2 ratio of atmosphere to ocean grid sizes allowed the ocean's important but smaller-scale phenomena to be treated as accurately as their counterparts in the atmosphere. Some of the mathematical speedup methods developed for CHAMMP allowed the high-resolution ocean and ice models to be used while consuming only about half the computing time of the full model.
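
The coupler's internals are not spelled out here, but the kind of operation it performs can be sketched: surface fields on the finer ocean grid are averaged conservatively onto the coarser atmospheric grid, and fluxes are mapped back the other way. The sketch below assumes, for simplicity, an integer refinement ratio and equal-area cells; the Parallel Climate Model's 9:2 ratio and spherical geometry require a more general area-weighted remapping.

```python
import numpy as np

def coarsen(field, ratio):
    """Average blocks of ratio x ratio fine-grid cells onto one coarse cell.

    Assumes the fine grid divides evenly into the coarse grid and that all
    cells have equal area; a real flux coupler weights by true cell areas.
    """
    ny, nx = field.shape
    return field.reshape(ny // ratio, ratio, nx // ratio, ratio).mean(axis=(1, 3))

# Hypothetical example: a 270 x 540 sea-surface-temperature field (2/3-degree
# spacing) averaged threefold onto a 90 x 180 (2-degree) grid.
sst_fine = 15.0 + 10.0 * np.random.rand(270, 540)
sst_coarse = coarsen(sst_fine, 3)
print(sst_coarse.shape)                                  # (90, 180)
print(np.isclose(sst_fine.mean(), sst_coarse.mean()))    # block averaging preserves the mean
```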

Version 1.0 of the resulting DOE Parallel Climate Model was established in early 1998 [12]. More recently, the complementary Climate System Model and Parallel Climate Model have been converging toward a unified effort in terms of software engineering and model physics.

The Parallel Climate Model is now being applied on up to a few hundred processors using IBM and SGI machines at NCAR and DOE laboratories, including the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory in Berkeley, Calif. Later applications will target large clusters of servers with a thousand or more processors. To examine the scaling of the nonatmospheric components onto a large number of processors, NCAR scientists ran simulations at NERSC on as many as 512 processors (see Table 2). The results in the table show excellent scaling; one year of simulation by ocean, ice, and the flux-coupling driver takes only about half an hour to complete on 256 processors.

Unfortunately, the CCM-3 was unable to scale beyond 64 processors with its usual number of atmospheric waves. Earlier CHAMMP work at Oak Ridge National Laboratory in Oak Ridge, Tenn., produced a parallel CCM-2 based on a 2D decomposition of the global domain, rather than the usual 1D approach employed by CCM-3 (see www.epm.ornl.gov/chammp/pccm2.1/index.html). So in 1998, CHAMMP investigators at NCAR adapted the Oak Ridge approach to map the CCM-3 model onto more than 64 processors. See Table 3 for test results from an SGI Origin 2000 at NCAR and a Cray Research T3E at NERSC. The 2D methods work as well as 1D methods on up to 64 processors, and, unlike the 1D approach, scale fairly well to 256 processors. The 2D approach also scales well to many thousands of processors if the atmospheric grid size is reduced. The atmospheric model now takes about 1.3 hours to run on 256 processors.
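
The benefit of a 2D decomposition can be seen from how much boundary ("halo") data each processor must exchange relative to the points it computes. The comparison below assumes a strip decomposition over latitude for the 1D case, square blocks for the 2D case, an illustrative grid size, and a one-point-wide halo; the actual decompositions in CCM-2 and CCM-3 differ in detail.

```python
import math

# Illustrative halo (communication) vs. interior (computation) points per processor
# for 1D latitude strips and 2D blocks on an nx x ny grid with a one-point-wide halo.
nx, ny = 128, 64   # assumed grid: longitudes x latitudes (roughly T42-sized)

for p in (16, 64, 256):
    interior = nx * ny // p
    # 1D: each processor owns whole latitude rows; impossible once p exceeds ny.
    halo_1d = f"{2 * nx:4d}" if p <= ny else " n/a"
    # 2D: each processor owns a block; assume a square processor layout.
    side = int(math.sqrt(p))
    halo_2d = 2 * (nx // side + ny // side)
    print(f"{p:4d} procs: interior {interior:5d}, 1D halo {halo_1d}, 2D halo {halo_2d:4d}")
```

In this toy setting, the 1D strips run out of rows to distribute beyond 64 processors, while the 2D blocks keep shrinking both their interiors and their halos, which is qualitatively the behavior reported for CCM-3 above.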

Climate modelers and software engineers now expect that on a somewhat faster machine, such as a Cray T3E1200 with 256 processors, one year of simulation with the full Parallel Climate Model will take only about 1.5 hours of wall-clock time. The same holds for newer 128-processor IBM and SGI machines.


Ocean Climate Equilibrium

A strategy often used for initializing climate models involves three steps:

  • Integrate the atmospheric model for a decade with observed sea-surface temperatures and ice coverage as a lower boundary condition;
  • Repeat the last five years of atmospheric model output over and over to drive the ocean and sea-ice models for centuries, until equilibrium is achieved; and
  • Run the models in fully coupled fashion for several decades as a prelude to conducting climate experiments of interest.

This strategy minimizes climatic drift if the models are relatively compatible in the first place; otherwise, some surface flux corrections may be needed to keep the models from drifting into unrealistic coupled states. Fortunately, neither the Climate System Model nor the Parallel Climate Model needs flux corrections.

Figure 4 shows an especially important result of applying the first two steps in the Parallel Climate Model. In 1998, Washington’s NCAR group used a Climate System Model-tested method to accelerate convergence of the solution out to the equivalent of 950 years by reducing the heat capacity of the deep ocean. The figure shows a long stream of water (in blue) at a depth of 500 meters circling clockwise near Antarctica in the South Atlantic and in the South Indian Oceans. The stream is fed from warmer waters south of Africa. The model’s overlying waters (not shown) are found to be overturning due to the expulsion of salt from newly formed drifting sea ice near Antarctica. The overturning water mixes with the warmer water flowing beneath it. The resultant waters then flow northward along the Antarctic Peninsula, with a net volume transport over all levels of 58 million cubic meters per second. As the flow turns eastward, about 15% of its volume descends along the continental slope to depths of 3–5 km, while remaining near the freezing point of sea water (28° F, or −2° C). This water mass spreads and slowly mixes with other water masses of North Atlantic origin to fill the abyss of the model ocean.

Noteworthy about this simulation of the South Atlantic is that these features agree with observations. They are especially important to the Earth's climate, since they influence global ocean circulation and heat transport over long periods of time, as well as the sensitivity of the climate system to CO2-induced changes [1]. The excellent simulation of South Atlantic conditions in Figure 4 was achieved using atmospheric model forcing rather than meteorological observations; thus, it is a good indication that the fully coupled model can reproduce actual climate conditions.
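
The convergence-acceleration trick mentioned above can be illustrated with a toy energy-balance box: a water layer relaxing toward equilibrium has an e-folding time proportional to its heat capacity, so artificially reducing the deep ocean's heat capacity (equivalently, lengthening its tracer time step) shortens the spin-up by the same factor. This is only a schematic of the idea, not the scheme actually used in the model.

```python
import math

# Toy spin-up: dT/dt = (T_eq - T) * lam / C, so the adjustment e-folding
# time is tau = C / lam. Reducing the effective heat capacity C shortens tau.
def years_to_equilibrate(heat_capacity, lam=1.0, tolerance=0.01):
    tau = heat_capacity / lam                   # e-folding time, in years
    return tau * math.log(1.0 / tolerance)      # time to decay to 1% of the initial offset

FULL_C = 1000.0   # illustrative deep-ocean heat capacity, arbitrary units
for factor in (1, 5, 10):
    years = years_to_equilibrate(FULL_C / factor)
    print(f"heat capacity reduced {factor:2d}x -> about {years:5.0f} years to equilibrium")
```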


Simulating El Niño

Washington’s parallel climate model group also took the additional step of running the model in fully coupled fashion for several decades, and further integration of the coupled model was begun at NCAR. Almost immediately, the model began showing interannual variability in the Tropical Pacific, with clear similarities to El Niño and the Southern Oscillation. Figure 5 shows the response of the ocean at a depth of 40 meters during years 15 and 16 in the simulation. There was little indication of El Niño during years 14 and 15; but one year later, a patch of anomalously warm water forms in the western Pacific and moves eastward to create an elongated warm tongue that is 4°–5° C above normal. Inspection of other fields (not shown) indicates that the cause of the model’s El Niño is a series of westerly wind bursts in the western Pacific in the CCM-3; these bursts in turn trigger a “train” of ocean waves trapped close to the equator.

At the end of the six-month period in Figure 5, a harbinger of La Niña occurs in the western Pacific and acts to return the coupled system to near normal behavior in the model years 16 and 17 (not shown). Thus, the coupled model spontaneously develops interannual variability with the proper intensity and timing relative to the observed phenomena. Building on this and other successful evaluations of the fully coupled model, Washington’s group at NCAR has now used the Parallel Climate Model for specific simulations of 20th-century climate and climate change caused by various specified CO2 increases. Continuing efforts are being supported by the DOE’s Climate Change Prediction Program.


Prospects for Climate Modeling

Table 4 indicates the scale of the undertaking involved in running high-quality climate models. Because the ocean and atmosphere are turbulent fluids that can evolve very differently from slightly different initial states, most simulations must be done in ensembles of 10 or more individual runs, not only to map the envelope of possibilities but to understand the preferred modes of climate variations and the limits of their predictability. As many as 15,000 simulated years are needed to quantify the ordinary decade-to-century oscillations, as well as the infrequent but abrupt changes in climate, such as those seen in very long-term climatic records [1].

When dealing with the historical climate of, say, 1850–2000, unknowns in initial conditions, past atmospheric composition, and solar variations dictate a minimum of 20 simulations. The same logic applies to simulations of CO2-induced change, which should be integrated for at least 600 years to examine changes in heat transport caused by ocean overturning and the possibility of long-term recovery. Including atmospheric chemical reactions in simulations of sulfate and biomass effects can increase the requirements of each simulation tenfold; there should also be at least 10 studies of different industrial and agricultural inputs to the atmosphere. Finally, studying the mitigating effects of various strategies for CO2 regulation requires ensembles in order to establish the range of possible outcomes. The total machine time to accomplish all these studies, if one uses the Parallel Climate Model to simulate one year in 1.5 hours of clock time, is 130,500 hours, or roughly 15 full years.
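
The arithmetic behind the total is straightforward; the sketch below simply converts simulated years into machine time at 1.5 clock hours per model year. (The 87,000 simulated years used here is implied by the 130,500-hour total, not an entry read directly from Table 4.)

```python
# Convert a campaign of simulated years into machine time.
HOURS_PER_CALENDAR_YEAR = 365 * 24   # 8,760

def campaign_cost(simulated_years, hours_per_model_year=1.5):
    clock_hours = simulated_years * hours_per_model_year
    return clock_hours, clock_hours / HOURS_PER_CALENDAR_YEAR

# 130,500 clock hours at 1.5 hours per model year implies 87,000 simulated years.
clock_hours, machine_years = campaign_cost(87_000)
print(f"{clock_hours:,.0f} clock hours ~= {machine_years:.1f} years of continuous computing")
```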

The immediate consequence of this calculation is that we need machines faster than the ones available today, even for the intermediate resolution of the Parallel Climate Model. Since typical 128-processor machines sustain about 10Gflops for the Parallel Climate Model, we already need machines of 100Gflops–1Tflop (a trillion floating point operations per second). These requirements increase dramatically if ocean grid sizes are decreased optimally about tenfold to 1/10 degree, and atmospheric grid sizes are decreased tenfold; an associated decrease in time step produces a net thousandfold increase in machine requirements.
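
The thousandfold figure follows from the standard scaling argument: refining the horizontal grid by a factor r multiplies the number of grid points by r squared, and the stability (CFL) limit forces roughly r times more time steps, so cost grows about as r cubed. A quick check under that assumption:

```python
# Approximate cost growth when horizontal resolution is refined by a factor r:
# r**2 more grid points times r more time steps (stability limit) ~ r**3.
for r in (2, 4, 10):
    print(f"refine {r:2d}x -> roughly {r**3:5,d}x more computation per simulated year")
```

A tenfold refinement thus multiplies the cost by about a thousand, matching the estimate above.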

Table 5 indicates how grid refinement might be done gradually as future machines approach 1,000Tflops (or 1Pflop), assuming the atmosphere:ocean grid sizes maintain the desirable Parallel Climate Model ratio of 9:2. (Petaflop performance is achievable, according to computer experts, but will involve considerably more parallelism than is available today [4].)

If machine speed increases along these approximate time lines, then much progress in definitive climate modeling can occur over the next 20 years, with important benefits to society. As an intermediate goal, a 10Tflop level of computing by 2005 would allow the full suite of climate simulations in Table 4 to be accomplished in 18 months using a 1/6-degree, 40-vertical-level ocean model and a 3/4-degree atmospheric model. This fourfold increase in resolution relative to the Parallel Climate Model for both ocean and atmosphere would allow major improvements in the amount of reliable regional information that could be provided by climate simulations. U.S. national planning recently began for developing 10Tflop computing power through programs seeking to advance the goals of simulation science in such areas as climate research and combustion modeling (see www.er.doe.gov/ssi).


Conclusion

Quantitative modeling of global ocean circulation requires 1/10-degree grids on machines sustaining 100Gflops, although many aspects of ocean climate variability can be studied at somewhat lower resolution. All current climate models compromise ocean and atmospheric resolution and still cannot accommodate the need for many ensembles of multicentury simulations. The DOE’s Parallel Climate Model effort shows that all climate model components can be parallelized efficiently for MPP and DSM architectures to take advantage of emerging 100Gflop–1Tflop machines and to begin performing a full suite of climate simulations. Truly definitive prediction of climate variations and regional climate change can be addressed as highly parallel machines sustaining 10–1,000Tflops become available over the coming decades.


Figures

F1 Figure 1. Instantaneous sea surface temperature from a 1/6-degree global ocean simulation [2].

F2 Figure 2. Regions of strong unstable ocean currents, indicated by the standard deviation of sea-surface height (above) from a 1/10-degree Atlantic model and from satellite observations (below) [9].

F3 Figure 3. Decadal climate variations in sea ice near Antarctica in satellite observations (left) and output of a 1/4-degree ocean-ice model (right).

F4 Figure 4. Time-averaged currents and temperature at 500-meter depth in the South Atlantic and the Southwest Indian Oceans (from a 950-year simulation of ocean and ice driven by NCAR’s Community Climate Model).

F5 Figure 5. Simulation of El Niño by the DOE’s Parallel Climate Model in the equatorial Pacific at 40-meter depth between latitude 5 S and 5 N, including average temperature of the model’s years 15 and 16 and six successive monthly anomaly patterns.


Tables

T1 Table 1. Ocean simulation requirements on parallel machines.

T2 Table 2. Parallel Climate Model timings, or wall-clock time per model year in hours, on a Cray T3E900 without the CCM-3 atmospheric model.

T3 Table 3. CCM-3 atmospheric model timings, or wall-clock time per model year in hours, on an SGI Origin 2000 and a Cray T3E900.

T4 Table 4. Coupled model simulations needed to understand climate variations and change.

T5 Table 5. Future climate simulation requirements for fully coupled runs (assuming an atmosphere:ocean grid size ratio of 9:2).

References

    1. Broecker, W. Thermohaline circulation, the Achilles heel of our climate system: Will man-made CO2 upset the current balance? Science 278, 5343 (Nov. 28, 1997), 1582–1588.

    2. Maltrud, M., Smith, R., Semtner, A., and Malone, R. Global eddy-resolving ocean simulations driven by 1985–95 atmospheric fields. J. Geophys. Res. 103, C13 (Dec. 15, 1998), 30825–30853.

    3. Mann, M., Bradley, R., and Hughes, M. Global-scale temperature patterns and climate forcing over the past six centuries. Nature 392, 6678 (Apr. 23, 1998), 779–787.

    4. Messina, P., Culler, D., Pfeiffer, W., Martin, W., Oden, J., and Smith, G. Architecture. In the special section The High Performance Computing Continuum. Comm. ACM 41, 11 (Nov. 1998), 36–44.

    5. Philander, S. El Niño, La Niña, and the Southern Oscillation. Academic Press, San Diego, 1990.

    6. Semtner, A. Modeling ocean circulation. Sci. 269, 5229 (Sept. 8, 1995), 1379–1385; see also web.nps.navy.mil/~braccio/science/semtner.html.

    7. Semtner, A. and Chervin, R. Breakthroughs in ocean and climate modeling made possible by supercomputers of today and tomorrow. In Proceedings of Supercomputing '88, J. Martin and S. Lundstrom, Eds. (Orlando, Fla., Nov. 14–18, 1988). IEEE Computer Society Press, Washington, D.C., 1989, 230–239.

    8. Semtner, A. and Chervin, R. Ocean general circulation from a global eddy-resolving ocean model. J. Geophys. Res. 97, C4 (Apr. 15, 1992), 5493–5550.

    9. Smith, R., Maltrud, M., Bryan, F., and Hecht, M. Numerical simulation of the North Atlantic Ocean at 1/10 degree. J. Phys. Oceanogr. 30 (2000); see also www.cgd.ucar.edu/oce/bryan/woce-poster.html.

    10. Stammer, D., Tokmakian, R., Semtner, A., and Wunsch, C. How well does a 1/4-degree global circulation model simulate large-scale oceanic observations? J. Geophys. Res. 101, C10 (Nov. 15, 1996), 25779–25811.

    11. Washington, W. and Parkinson, C. An Introduction to Three-Dimensional Climate Modeling. University Science Books, Mill Valley, Calif., 1986.

    12. Washington, W., Weatherly, J., Semtner, A., Bettge, T., Craig, A., Strand, W., Wayland, V., James, R., Meehl, G., Branstetter, M., and Zhang, Y. A DOE Coupled Parallel Climate Model with High-Resolution Ocean and Sea Ice: An Update. Internal NCAR rep., Boulder, Colo., Mar. 1999; see www.cgd.ucar.edu/pcm/new_update/index.html.
