There are as many definitions of the object-oriented programming paradigm as there is literature trying to define it. However, if we go back to the roots, to the Simula language, we can state that OO programming is, first of all, about mapping a physical model of a problem domain into a program. An OO program aims to reflect the structure of the application domain through a one-to-one correspondence between objects of the application domain and objects of the computational model. Bjarne Stroustrup defined OO programming as follows: "Decide which classes you want; provide a full set of operations for each class; make commonality explicit by using inheritance." Where there is no such commonality, data abstraction suffices ("decide which type you want and provide a full set of operations for each type").
If the application domain is, say, the administration of a university, then the students and university employees are represented by objects, and the concepts of student, graduate student, teaching assistant, professor, and so forth, are represented by classes of an inheritance hierarchy. This modeling-based approach improves the productivity and quality of software development and enables various forms of reuse, both at the level of an individual class and at the level of a group of classes (such as a library or a framework). An application domain for which object orientation has been particularly successful is that of graphical interfaces: following an OO programming approach, concepts like window, mouse, and button are represented by object classes. A distributed system is just another example: it can be viewed as the application domain for which we should identify and classify the fundamental abstractions. Following this approach in a rigorous way is challenging. It means structuring a distributed system in the form of a class library all the way down. It also means going beyond current practices, which consist in wrapping existing toolkits in OO-like dressing.
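To illustrate the modeling idea in code, here is a minimal Java sketch of such a hierarchy; the class names simply mirror the concepts above and are not drawn from any actual system:

```java
// Hypothetical illustration of mapping a problem domain onto a class hierarchy.
// Commonality (here, a name) is factored into the root class; inheritance
// captures the "is-a" relationships of the application domain.
abstract class Person {
    private final String name;
    Person(String name) { this.name = name; }
    String name() { return name; }
}

class Student extends Person {
    Student(String name) { super(name); }
}

class GraduateStudent extends Student {
    GraduateStudent(String name) { super(name); }
}

class Employee extends Person {
    Employee(String name) { super(name); }
}

class Professor extends Employee {
    Professor(String name) { super(name); }
}

// A teaching assistant is both a graduate student and an employee; single
// inheritance forces a design choice, here modeled as a kind of graduate student.
class TeachingAssistant extends GraduateStudent {
    TeachingAssistant(String name) { super(name); }
}
```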
The Outfit Does Not Make the Monk
Since object orientation has become a respectable denomination in the industrial arena, there has been a strong tendency to wrap existing prototypes or products behind OO interfaces. This dressing is usually accompanied by a marketing campaign advertising the benefits of the new OO line of the system. This has been particularly true for various distributed programming toolkits that have been made CORBA compliant through a careful redesign of their interfaces in the Interface Definition Language (IDL) defined by the OMG (www.omg.org). While an OO interface tends to make the functionality of a distributed toolkit easier to understand, it does not fundamentally improve its modularity, its extensibility, or the reuse of its components, all concepts that define what OO programming is really about.
The question here is one of granularity. Most toolkit products for distributed programming are large, monolithic entities. Wrapping them behind an OO-like interface and calling them services (rather than toolkits) does not make them object oriented. To make this point more concrete, consider three a priori unrelated examples of services whose interfaces have been (or are in the process of being) standardized by the OMG: the CORBA transaction service, the CORBA event service, and the forthcoming CORBA service aimed at achieving fault tolerance through entity redundancy.
The transaction service is described as a set of interfaces for transaction manipulation and atomic commitment. In particular, it provides an interface for a two-phase commit protocol. The event service is a set of interfaces for event manipulation following a publish/subscribe communication pattern: a set of consumers can subscribe to a channel and subsequently receive, through an asynchronous multicast, some of the events put into the channel. Finally, the fault-tolerance service is based on the idea of entity redundancy through some notion of group; roughly speaking, the failure of an object is hidden by the group.
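To fix intuitions about the publish/subscribe pattern underlying the event service, here is a minimal Java sketch; the types and methods are hypothetical and are not the OMG IDL interfaces:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative publish/subscribe channel, not the CORBA event service API.
interface Consumer {
    void onEvent(Object event);          // called back by the channel on delivery
}

class EventChannel {
    private final List<Consumer> consumers = new CopyOnWriteArrayList<>();

    void subscribe(Consumer c)   { consumers.add(c); }
    void unsubscribe(Consumer c) { consumers.remove(c); }

    // "Multicast" an event to all current subscribers; a real service would
    // deliver asynchronously and could enforce ordering or atomicity guarantees.
    void publish(Object event) {
        for (Consumer c : consumers) {
            c.onEvent(event);
        }
    }
}
```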
The interfaces provided (or under specification) by the OMG aim to describe the general functionality of each of these services. The services are considered separately, and their common underlying components are not captured. An OO design of the underlying problem domain would have led to capturing and factoring out common underlying concepts such as persistence, failure detection, multicast, and distributed agreement. Furthermore, given the large-grain level at which the interfaces are provided, it is not at all clear how one could extend any of these services. Extensibility does not seem to have been a major concern when designing the interfaces.
For example, it is anything but straightforward to extend the transaction service with an alternative commitment protocol to the two-phase commit provided by default, even though there are many cases where other forms of commitment are more appropriate. Identifying a very generic notion of agreement that could be subclassed to provide different forms of distributed commitment would have been a more OO approach. In fact, such a generic notion of agreement could also be used for the fault-tolerance service, for instance behind group membership, and for the event service, to guarantee, when required, some form of atomicity such as a globally ordered multicast for event delivery.
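To make this suggestion concrete, here is a minimal Java sketch of such a generic agreement abstraction; the class names and decision rules are hypothetical, deliberately naive, and do not come from any OMG specification. The point is only that commitment and membership become subclasses of a single, reusable abstraction:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Hypothetical generic agreement abstraction; subclasses specialize how
// individual votes are combined into a single, common outcome.
abstract class Agreement<V, D> {

    // Collect one vote per participant (a real protocol would do this over
    // the network and tolerate failures), then apply the decision rule.
    D run(Collection<V> votes) {
        return decide(votes);
    }

    protected abstract D decide(Collection<V> votes);
}

// Atomic commitment: commit only if every participant voted yes.
class TwoPhaseCommit extends Agreement<Boolean, Boolean> {
    @Override
    protected Boolean decide(Collection<Boolean> votes) {
        return votes.stream().allMatch(v -> v);
    }
}

// Group membership: agree on the set of processes considered alive,
// here naively taken as the intersection of all proposed views.
class MembershipAgreement extends Agreement<Set<String>, Set<String>> {
    @Override
    protected Set<String> decide(Collection<Set<String>> views) {
        Set<String> result = new HashSet<>();
        boolean first = true;
        for (Set<String> view : views) {
            if (first) { result.addAll(view); first = false; }
            else       { result.retainAll(view); }
        }
        return result;
    }
}
```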
Furthermore, many distributed applications require transactional, fault-tolerance, and publish/subscribe semantics at the same time. The fact that the services providing each of these semantics are considered separately can turn out to be a major handicap for the application developer who wishes to make use of them. In contrast, searching for the commonalities underlying those services and abstracting them out could be of major help to the developer.
The preceding observations are in no way criticisms of CORBA. The OMG has specified the most advanced distributed system infrastructure that is viable from an industrial perspective. Our observations simply state that complying with those specifications does not make a distributed service object oriented, nor does it mean that the service is more modular and extensible, or that its components are easier to share and reuse.
In Search of the Holy Grail
Designers of high-level distributed frameworks should first identify the basic abstractions, and then build libraries that help in understanding those abstractions and their interactions. These steps cannot be achieved without a deep understanding of the abstraction domain, that is, distributed computing. The most fundamental questions in designing basic abstractions are, first of all, technical questions. Classification and specialization mechanisms, as offered by OO languages, are then appropriate for organizing a library/hierarchy of higher-level abstractions.
Finding the right way to represent distribution-related concepts (message, machine, process, failure detection, agreement, to name a few) as first-class programming entities, and then classifying them within hierarchical libraries and frameworks, are indeed challenging issues. Higher-level abstractions, such as various forms of remote method invocation and distributed event processing, could then be built out of these basic abstractions. A programmer can use a high-level abstraction such as remote method invocation if it fits the requirements of the application, but nothing should prevent him or her from directly using the more fundamental abstractions and manipulating them as first-class entities.
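A minimal sketch of what such first-class entities might look like, assuming hypothetical Java interfaces rather than any existing library:

```java
// Hypothetical first-class distribution abstractions; all names are illustrative.
interface Message {
    byte[] payload();
}

interface RemoteProcess {                 // the "process" abstraction of the text
    String id();
    void send(Message m);
}

interface FailureDetector {
    boolean isSuspected(RemoteProcess p); // an unreliable hint, not a guarantee
}

// A higher-level abstraction, remote invocation, built out of the lower-level
// ones rather than hiding them; the programmer may still use them directly.
class RemoteInvocation {
    private final RemoteProcess target;
    private final FailureDetector detector;

    RemoteInvocation(RemoteProcess target, FailureDetector detector) {
        this.target = target;
        this.detector = detector;
    }

    void invoke(Message request) {
        if (detector.isSuspected(target)) {
            throw new IllegalStateException("target " + target.id() + " is suspected to have failed");
        }
        target.send(request);             // a real implementation would await a reply
    }
}
```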
Furthermore, following the Hollywood principle, "Don't call us, we'll call you," parts of the abstractions should be left to be customized by the application programmer, as one rarely finds, in any given library, exactly the abstraction needed by the application. There are many cases where some form of message passing or event-oriented communication is more appropriate than remote method invocation. One can, indeed, build an event-oriented communication system using threads on top of a remote method invocation mechanism, but this is by no means a natural construction and would lead to considerable overhead. Instead, the more basic abstractions of messages and communication ports should be made available as first-class objects.
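For instance, a hypothetical communication port could expose message passing directly and call application code back upon delivery, in the spirit of the Hollywood principle; the names below are illustrative only:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical first-class communication port: the application registers a
// handler, and the port calls it back whenever a message is delivered.
class Port {
    interface Handler { void deliver(byte[] message); }

    private final BlockingQueue<byte[]> incoming = new LinkedBlockingQueue<>();

    Port(Handler handler) {
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    handler.deliver(incoming.take());   // "don't call us, we'll call you"
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        dispatcher.setDaemon(true);
        dispatcher.start();
    }

    // Invoked by the underlying transport when a message arrives.
    void receive(byte[] message) {
        incoming.add(message);
    }
}
```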
Consider the following analogy with concurrent programming. There are many cases where specific entities of an application are best represented by active objects, each with its own thread of control. One can certainly build an abstraction of an active object model in a concurrent language, but nothing should prevent the programmer from using the more fundamental abstractions, like threads and semaphores. Thanks to its well-defined interface (the wait and signal operations) and its well-known behavior (the train metaphor), the semaphore now represents a standard abstraction for concurrent programming, just as arrays and sets are basic abstractions for manipulating data structures in sequential programming.
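As a bare-bones illustration (not a library API), such an active object can be assembled directly from the fundamental abstractions just mentioned, a thread and a semaphore:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Minimal active object built from a thread of control and a semaphore
// used for wait/signal synchronization.
class ActiveObject {
    private final Queue<Runnable> requests = new ArrayDeque<>();
    private final Semaphore pending = new Semaphore(0);    // signaled once per request

    ActiveObject() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    pending.acquire();                      // "wait"
                    Runnable request;
                    synchronized (requests) {
                        request = requests.poll();
                    }
                    request.run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Asynchronous invocation: enqueue the request and signal the worker thread.
    void invoke(Runnable request) {
        synchronized (requests) {
            requests.add(request);
        }
        pending.release();                                  // "signal"
    }
}
```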
Conclusion
We argue that OO distributed programming is about viewing distribution as the application domain from which fundamental abstractions should be extracted and classified. In other words, OO distributed programming is not about hiding distribution, but precisely about exhibiting and understanding the very fundamental characteristics of distribution. This is a challenging task that should not be confused with some current practices that consist in wrapping existing distributed systems with OO-like interfaces. High-level abstractions such as remote method invocation are indeed useful, but should be viewed merely as abstractions, themselves made out of lower-level abstractions that are represented as first-class citizens.