
A Hierarchical Framework For Parallel Seismic Applications

Supporting the use of frameworks on distributed parallel platforms.

Organizations are increasingly contending with a parallel and distributed software crisis [4]. In particular, the growing heterogeneity of hardware architectures and diversity of communication platforms make it difficult to build parallel and distributed applications from scratch that are correct, portable, efficient, and inexpensive. Object-oriented application frameworks are considered promising technologies for improving software productivity and increasing the reusability, extensibility, and portability of software [4, 8]. Mature general-purpose frameworks such as ET++ and ORB already exist, as do domain-specific application frameworks such as Gebos [1, 3]. To extend the application scope of framework techniques, attention has recently turned to parallel frameworks; one representative effort is Parallel OO Methods and Applications (POOMA) [7] for particle-in-cell (PIC) simulations.

Unlike POOMA, we chose oil and gas exploration as our application domain and constructed a hierarchical parallel application framework for pre-stack depth/time seismic migrations. Based on this framework, we developed a hierarchical OO parallel environment (HOOPE). The emphasis of this article is on the construction and use of the parallel application framework.

We have worked for many years in the oil and gas exploration domain, developing several parallel software systems based on the parallel virtual machine (PVM) and the message-passing interface (MPI) [2] on different platforms. Over the past five years, most of the effort in implementing parallel software in this application domain has gone into code maintenance and porting. Exploring new algorithms on top of the existing software is rather difficult, and new parallel applications must be designed and implemented from scratch.

In this application domain, three-dimensional exploration and pre-stack seismic analysis techniques can dramatically improve the quality and fidelity of geophysical data and produce much sharper geological images. Because large data volumes and time-consuming computation are the two main characteristics of seismic data processing, parallel computers must be used to support 3D pre-stack seismic computation. For instance, a pre-stack depth migration over one million seismic traces takes several months on a single-processor computer. Only carefully engineered parallel software with an appropriate computation model makes pre-stack depth migration practical and productive.

From the parallel workflow of pre-stack migration (see Figure 1a and Figure 1b), we find two common steps in pre-stack depth migration: traveltime table computation over the velocity data volume, and seismic imaging applied to a seismic data cube. In each step, different algorithms can be used interchangeably for a particular result. We also notice that the data itself (the velocity and seismic cubes) decomposes naturally into sub-volumes that can be calculated individually, although the processes must intercommunicate. Therefore, we can extract at least the following common aspects of parallel migration:

  • Control flows for traveltime table computation and seismic imaging, which allow new algorithms to be developed easily.
  • Data decomposition, intercommunication, and integration strategies.
  • Scheduling and synchronization of multiple processes.

Illustrating how to organize these common aspects and create a reusable, component-based framework for parallel pre-stack migration development is one of the objectives of this article.
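
To make this control-flow skeleton concrete, here is a minimal C++ sketch of the shared two-step workflow; every type and function name in it is an illustrative assumption, not the framework's actual API.

struct VelocityVolume {};   // velocity data, decomposed into sub-volumes
struct SeismicCube {};      // seismic traces, decomposed into sub-volumes
struct Timetable {};        // traveltime table produced by step 1
struct ImageVolume {};      // migrated image produced by step 2

// Step 1: traveltime computation; alternative algorithms are interchangeable.
Timetable computeTraveltimes(const VelocityVolume&) { return {}; }

// Step 2: imaging applied to the seismic cube using the traveltime table.
ImageVolume migrate(const SeismicCube&, const Timetable&) { return {}; }

int main() {
    VelocityVolume velocity;   // each process holds only its own sub-volume
    SeismicCube seismic;
    Timetable tt = computeTraveltimes(velocity);   // common step 1
    ImageVolume img = migrate(seismic, tt);        // common step 2
    (void)img;  // decomposition, communication, and integration belong to the framework
}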


Hierarchical Framework Approach

In early 1998 we began this project by examining the main problems encountered when designing and developing parallel migration applications. Besides the algorithm itself, developers must attend to the common aspects identified earlier; in fact, those aspects can be abstracted and reused. Important nonfunctional aspects, including software extensibility and portability, must also be taken into account. All these elements mixed together make it rather difficult to design and develop a parallel application.

An OO application framework is a reusable design of all or part of a system, represented by a set of classes and the way their instances interact [5]. It is the skeleton of an application that can be customized by an application developer. With the help of framework techniques, we can create a common skeleton that allows parallel migration to be customized with different algorithms. The framework abstracts away the parallel issues, providing programmers with a clear interface for developing parallel migration applications and a strategy for code reuse.

A hierarchical architecture pattern is used for this skeleton design. We encapsulate the parallelism-related issues into the lowest level, the parallel abstraction layer. The second abstraction level is the global abstract data type layer, which provides basic data types responsible for data representations and operations for depth migration. The third abstraction level is the functional component layer, which provides a set of complementary function modules for migration. The top abstraction level belongs to the application programmers, who develop particular parallel migrations by employing the lower layers.

These broad categories, from basic environments to applications, provide an effective means to encapsulate parallelism-related issues, abstract the kernel structures of applications, and incorporate domain-specific knowledge into the abstract skeletons. As a result, application developers can build their parallel applications at different abstraction levels by reusing data representations, data layouts, and communication strategies. Structurally, the abstraction levels are organized in the OO context with the goals of portability and extensibility. Objects in one layer may use only the services provided by objects in the same layer and the layers below it, while providing services to the layers above; lower layers cannot see the layers above them. In this way, every newly added data type or component can easily find its layer and knows which services it may use to implement its function, so extensibility is supported. Portability is likewise enhanced because dependency relations are reflected in the structure: if parallel applications use only the component layer and the global abstract data type layer, they are portable across platforms as long as the parallel abstraction layer works.
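
The layering rule can be pictured with a small C++ sketch; the namespace and function names below are illustrative assumptions, not the framework's real interfaces.

namespace parallel_abstraction {      // lowest layer: MPI and Parallel-Arrays
    void updateGhostBoundary() {}
}
namespace global_data_types {         // may call only parallel_abstraction
    void distributeVelocity() { parallel_abstraction::updateGhostBoundary(); }
}
namespace functional_components {     // may call the two layers below it
    void convolve() { global_data_types::distributeVelocity(); }
}
namespace application {               // top layer: particular parallel migrations
    void runMigration() { functional_components::convolve(); }
}
// Porting means reimplementing parallel_abstraction behind the same
// interface; the upper namespaces remain untouched.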


A Parallel Seismic Application Framework

Based on the hierarchical architecture discussed above, we constructed a domain-specific application framework for parallel seismic imaging applications. The framework consists of four layers, as shown in Figure 2 and Figure 3.

The Application Layer provides a software skeleton for seismic imaging applications, such as pre-stack depth/time migrations and related algorithms. The Functional Component Layer consists of objects for seismic functions such as Convolution, Correlation, Matrix Solver, and Data-Engine. The Global Abstract Data Type Layer, supported by the Parallel-Arrays in the layer below, provides users with a data-parallel representation and a set of APIs for a variety of seismic imaging data types. This layer includes the following abstract data types (a sketch of their interfaces appears after the list):

  • VField, a class for seismic wave traveltime table computation by ray tracing. This class distributes the velocity volume among the nodes with a ghost boundary sized according to the migration aperture, and invokes the default operator to generate the traveltime table. The default operator can be overloaded by a subclass.
  • Image, the class for depth migration with the traveltime table. The class supplies a default migration operator that can be overloaded with new algorithms.
  • Engine, the class for constructing the initial imaging parameters and distributing them to the local nodes.
  • RawData, the class for importing the seismic data volume into the memories of the local nodes. It provides a simple interface for parallel data management.
  • ImageOut, the class for integrating the data partially migrated by the local processors and exporting the final result to the designated processor.
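
The following sketch reconstructs plausible C++ interfaces for these data types from the descriptions above; only the class names and computeGreenTB appear in the text, and all signatures and bodies are assumptions.

class VField {
public:
    virtual ~VField() = default;
    // Distributes the velocity volume among nodes with a ghost boundary sized
    // by the migration aperture, then runs the default ray-tracing operator.
    virtual void computeGreenTB() { /* default traveltime computation */ }
};

class Image {
public:
    virtual ~Image() = default;
    // Default depth-migration operator driven by a traveltime table;
    // subclasses may overload it with new imaging algorithms.
    virtual void migrate(const VField&) { /* default migration */ }
};

class Engine   { /* builds initial imaging parameters and scatters them */ };
class RawData  { /* imports the seismic volume into local-node memory   */ };
class ImageOut { /* gathers partial images and exports the final result */ };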

The Parallel Abstraction Layer, built on MPI, provides a set of Parallel-Arrays with domain-specific features and a global indexing mechanism. It mainly consists of a C++ template Array, used to represent the layout of a seismic data volume across multiple nodes with overlapped (ghost) boundaries, and an Overlay Array whose dimensions exceed those of the local array by the ghost boundary. Objects in this layer are responsible for capturing the key features of parallel programming, including data decomposition and inter-processor communication strategies for seismic imaging, ghost boundary updating, and global array indexing.
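
As a rough illustration of such a Parallel-Array, the sketch below assumes a one-dimensional block partition over MPI with a fixed-width ghost boundary; all member names and signatures are our assumptions rather than the framework's real interface.

#include <mpi.h>
#include <vector>

template <typename T>   // T is assumed trivially copyable
class Array {
public:
    Array(long globalSize, int ghostWidth, MPI_Comm comm)
        : comm_(comm), ghost_(ghostWidth) {
        MPI_Comm_rank(comm_, &rank_);
        MPI_Comm_size(comm_, &nprocs_);
        localSize_ = globalSize / nprocs_;        // simple block partition
        lo_ = rank_ * localSize_;                 // first owned global index
        local_.resize(localSize_ + 2 * ghost_);   // interior plus two ghost strips
    }

    // Global indexing: map a global index into the overlaid local storage.
    T& operator[](long g) { return local_[g - lo_ + ghost_]; }

    // Ghost boundary updating: exchange halo cells with both neighbors.
    void updateGhost() {
        int left  = (rank_ > 0)           ? rank_ - 1 : MPI_PROC_NULL;
        int right = (rank_ < nprocs_ - 1) ? rank_ + 1 : MPI_PROC_NULL;
        int bytes = ghost_ * (int)sizeof(T);
        // send leftmost interior cells left; fill right ghost strip from the right
        MPI_Sendrecv(&local_[ghost_], bytes, MPI_BYTE, left, 0,
                     &local_[localSize_ + ghost_], bytes, MPI_BYTE, right, 0,
                     comm_, MPI_STATUS_IGNORE);
        // send rightmost interior cells right; fill left ghost strip from the left
        MPI_Sendrecv(&local_[localSize_], bytes, MPI_BYTE, right, 1,
                     &local_[0], bytes, MPI_BYTE, left, 1,
                     comm_, MPI_STATUS_IGNORE);
    }

private:
    MPI_Comm comm_;
    int rank_ = 0, nprocs_ = 1;
    int ghost_;
    long localSize_ = 0, lo_ = 0;
    std::vector<T> local_;
};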

Pre-stack depth migration mainly consists of two computation steps: traveltime table computation and seismic imaging. Figure 4 shows the object interactions in the first step with a brief sequence diagram. The Migration object starts the computation by calling the computeGreenTB() method of the VField object. In this method, the VField object implements a ray-tracing algorithm using the operations provided by the Array objects. The Array objects delegate these operations to their LocalArray objects, which perform the real computation, while their ArrayDescriptors carry out the necessary transformation and communication.
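
In code, that interaction might look like the hedged sketch below; the class names follow the text, while every body, signature, and the iteration count are illustrative assumptions.

class LocalArray {
public:
    void compute() { /* performs the real per-node computation */ }
};

class ArrayDescriptor {
public:
    void exchange() { /* carries out transformation and communication */ }
};

class Array {
public:
    void rayTraceStep() {
        local_.compute();    // Array delegates the operation to its LocalArray
        desc_.exchange();    // its ArrayDescriptor handles any needed communication
    }
private:
    LocalArray local_;
    ArrayDescriptor desc_;
};

class VField {
public:
    // Default ray-tracing operator, expressed as a series of Array operations.
    void computeGreenTB() {
        for (int step = 0; step < numSteps_; ++step) velocity_.rayTraceStep();
    }
private:
    Array velocity_;
    int numSteps_ = 3;       // illustrative iteration count
};

class Migration {
public:
    void run() { vfield_.computeGreenTB(); }   // starts the first computation step
private:
    VField vfield_;
};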


How to Extend and Use the Framework

The framework can be extended using three techniques: class-based inheritance, class-based aggregation, and subsystem replacement. For instance, an application developer can use the framework to customize parallel pre-stack time/depth migrations and prototype new algorithms, and can also create new data types and data layout strategies for other parallel applications. The framework provides the following typical ways to accomplish this:

  • Instantiate the framework directly with its default operators to create a pre-stack depth migration application, or customize the framework with new migration algorithms by subclassing the existing classes. For example, we can define a class KmigVfield that inherits from class VField and redefines the operation computeGreenTB with a different algorithm (class-based inheritance extension; see the sketch after this list).
  • Add a new data layout strategy to the parallel class. In the first version of HOOPE, the class Array<T> supports only block partitioning of large data volumes with a static ghost boundary. Thanks to the system's layered structure, a new layout algorithm with a different ghost boundary can be inserted by overloading the same partition operations without interfering with the other layers.
  • Replace the Parallel Abstraction Layer with a different implementation for another hardware architecture behind the same interface; applications developed with the framework are then portable across architectures (subsystem replacement extension).
  • Extend the framework with new abstract data types. Using aggregation, we can create new parallel elements from the parallel objects in the lower layers. For instance, a new abstract parallel data type DeMO could be inserted into an upper layer by combining a parallel array object and a data I/O object from the Parallel Abstraction Layer (class-based aggregation extension).
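
As an example of the class-based inheritance extension (first item above), the sketch below subclasses VField; only the names VField, KmigVfield, and computeGreenTB come from the text, and the bodies are placeholders.

#include <iostream>

class VField {
public:
    virtual ~VField() = default;
    // default traveltime operator supplied by the framework
    virtual void computeGreenTB() { std::cout << "default ray tracing\n"; }
};

// Customization by subclassing: redefine the traveltime computation while the
// framework's control flow, data layout, and communication code are reused.
class KmigVfield : public VField {
public:
    void computeGreenTB() override {
        std::cout << "alternative traveltime algorithm\n";
    }
};

int main() {
    KmigVfield field;
    VField& v = field;
    v.computeGreenTB();   // the framework's call resolves to the new algorithm
}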


A Parallel Framework-based Development Environment

Based on the preceding parallel application framework, we designed and developed the parallel development environment HOOPE to support the use of the frameworks on a distributed parallel platform, in which a variety of supercomputers in geographically dispersed locations are linked by high-speed networks. A single supercomputer is a special case of this general platform. HOOPE comprises three parts: HOOPE-FrameWork, HOOPE-Agents, and HOOPE-Interface, as shown in Figure 5.

The HOOPE-FrameWork implements a component-based parallel application framework for pre-stack migration on one supercomputer. The HOOPE-Agents provide a mobile-agent-based coordination mode for flexibly and efficiently coordinating the parallel frameworks running on different supercomputers connected by high-speed networks. The HOOPE-Interface offers a uniform parallel workspace through which remote users can use these frameworks transparently. Because the parallel application frameworks can be customized and the translation from parallel workflow to mobile-agent-based coordination can be automated, the environment as a whole can be viewed as a single virtual distributed parallel framework.


Conclusion

After the HOOPE environment was developed, we conducted experiments and performance analysis on its main parts [6]. For example, we used the framework to develop a parallel 2D pre-stack depth migration. Our experiments show that the development time for parallel depth migration with the framework (1 person-month) is about a quarter of that required when building directly on MPI (4 person-months). The performance analysis also shows an average speed-up of about 3.3 (4 CPUs, shared-memory Sun HPC3000) [6]. We also conducted further experiments on Unix-based workstations, MPI clusters of workstations, and shared-memory parallel machines.

Our experience demonstrates that parallel frameworks are difficult to construct, largely because so many issues must be considered when developing them, including the computation model, data layout, intercommunication, and synchronization. In addition, nonfunctional aspects such as a framework's extensibility and portability have to be taken into account. The hierarchical architecture is one of the most effective approaches to designing and developing parallel application frameworks with extensibility and portability. For instance, we ported the framework from shared-memory machines to MPI clusters of workstations with little effort, because usually only the parallel abstraction layer needs to change. Parallel frameworks have proven helpful for rapidly prototyping parallel algorithms and for speeding up parallel application development.


Figures

Figure 1. (a) Traveltime table computation. (b) Pre-stack depth migration computation.

Figure 2. Layered structure of the parallel frameworks.

Figure 3. The brief class diagram of the application.

Figure 4. Part of the sequence diagram of the framework.

Figure 5. HOOPE architecture.


    1. Birrer, A. and Eggenschwiler, T. Frameworks in the financial engineering domain: An experience report. In O. Nierstrasz, Ed., Proceedings of ECOOP '93 (Kaiserslautern, Germany, July 26–30), LNCS 707, Springer-Verlag, Berlin, 1993.

    2. Gropp, W., Lusk, E., and Skjellum, A. Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press, Cambridge, MA, 1994.

    3. Bäumer, D., Gryczan, G., et al. Framework development for large systems. Commun. ACM 40, 10 (Oct. 1997), 52–59.

    4. Fayad, M.E. and Schmidt, D.C. Object-oriented application frameworks. Commun. ACM 40, 10 (Oct. 1997), 32–38.

    5. Johnson, R.E. Frameworks=components+patterns. Commun. ACM 40, 10 (Oct. 1997), 39–42.

    6. Li, Y. The research and implementation of object-oriented parallel application frameworks. Ph.D. dissertation, Nanjing University, P.R. China. October, 1999.

    7. Reynders, J., et al. POOMA: A framework for scientific simulations on parallel architectures. In G.V. Wilson and P. Lu, Eds., Parallel Programming using C++. MIT Press, 1996; www.acl.lanl.gov/pooma/

    8. Schmidt, D.C. and Fayad, M.E. Lessons learned building reusable OO frameworks for distributed software. Commun. ACM 40, 10 (Oct. 1997), 85–87.

    This work was supported by the National Science Foundation of China under Grant No. 69873021 and the National Excellent Young Scientist Foundation of China under Grant No. 61525204. The HOOPE-FrameWork was partly supported by Beijing Global Software Corporation.
