Research and Advances
Architecture and Hardware

The Relationship Between Distributed Systems and Open Software Development

The behavior and performance of OSD is best appreciated as a distributed system.
  1. Introduction
  2. Distributed Systems
  3. The Bazaar as a Distributed System
  4. The Web
  5. Compromise: Control vs. Performance
  6. The Future: A Certain Tension
  7. References
  8. Authors

Open source development (OSD) is a revolutionary new model for software development. Or at least that’s what many people want us to think. Most of the growing open source community, which now boasts several corporate-sponsored projects such as Netscape’s Mozilla, hails Linux operating system developer Linus Torvalds as the messiah of this new model. Eric Raymond, in his important paper, "The Cathedral and the Bazaar," claims that Torvalds’ cleverest and most consequential hack was not the construction of the Linux kernel itself, but rather his quasi-guidance of the Linux development model—a best practice extraordinaire of OSD [7]. But what is this model and why does it work? Here, we explore these questions by examining the OSD model as a classically defined distributed system.

Raymond contrasts traditional software models—models rooted in traditional rigor and precision—to the OSD model, typified by the Linux community. In short, he portrays the OSD model as a bazaar, full of seeming chaos and irreverence, and contrasts that environment to the sacred intricacy of the work of building software cathedrals. As Raymond describes, the Linux project was not "quiet, reverent cathedral-building … rather … a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, which take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles" [7].

Torvalds was able to persuade thousands of top-notch developers from around the world to collaborate on his project. Through chat rooms and news groups he was able to recruit the hacker masses. No one could question the cumulative intellect of the project, but the management of such a complex project would surely require Spartan discipline and regimen. If a corporate setting, with the convenience of person-to-person communication, powerful source control tools, multiple levels of management, and the full resource backing of large companies (that is, miraculous amounts of cash), still could not achieve acceptable success rates, how could Torvalds hope to effectively command a loosely coupled horde of physically dispersed, autonomous, and heterogeneous programmers? Via stern news group postings? This characterization of the Linux project should bring to mind textbook definitions of distributed systems. Again, one of the main assertions of this article is that the behavior and performance of the OSD model is best understood as a distributed system.


Distributed Systems

A distributed system can be defined as "a collection of loosely coupled processors interconnected by a communication network" [8]. This same reference also describes the four major advantages of distributed systems as resource sharing, computation speedup, reliability, and communication. A distributed system can take many forms, but the gist is that its constituent components are geographically dispersed yet must work together as if they were processes merely dispersed across the address space of a shared memory with access to a shared CPU. They still perform the tasks that single computers perform, but they must allow their components to be located transparently across time and space.

Implementations of distributed systems face many challenges primarily arising from the heterogeneity of constituent components, the synchronization of remote processes, and the maintenance of data consistency. Resolution of these issues is a baseline requirement for any truly distributed, non-trivial system. In a computer-based distributed system, heterogeneity arises primarily from varying operating systems, architectures, file systems, and languages. Coordination of processes—a complex issue even in centralized architectures—becomes a hundredfold more complex when processes are spread over time and space. Data consistency, due to necessary replication, demands careful attention so that the work of the system is not undermined.

The mantra of distributed system design may be compromise, compromise, compromise—control for performance. Strict micromanagement of functionality and data integrity can quickly pull performance below acceptable levels. This lesson can easily be illustrated by an example from the distributed memory realm of distributed systems. The chief concerns in distributed memory involve data consistency. While a set of distributed processes may all access a single logical data structure during execution, in reality they typically access one of a number of discrete replicas of that logical data. If these replicas become inconsistent in a manner that changes the expected behavior of the system, the system fails. Implementing absolute control over the data would prevent this but would also fully undermine system performance.

The distributed memory solution to data consistency is a pragmatic one. It turns out that strict consistency is rarely critical. When it is, we address it with absolute precision. Otherwise, we loosen control. If a distributed memory system tries to maintain a strict consistency model, performance will suffer tremendously [2]. The key is to identify the level of consistency control that actually meets, but does not exceed, the consistency needs of the system. This is the basis of weak consistency models, under which we strive to meet only the level of consistency that is vital. We strive toward "better performance" by refraining from being "unnecessarily restrictive" [2].
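The trade-off described above can be sketched in a few lines of code. The following toy replicated store (a hypothetical illustration, not any real distributed shared memory implementation) lets writes land on the local replica immediately and propagates them only at explicit synchronization points, the way weak consistency models defer agreement until it actually matters:

```python
class Replica:
    """One local copy of the logically shared data."""
    def __init__(self):
        self.data = {}

class WeaklyConsistentStore:
    """Toy weakly consistent store: writes hit the local replica at once
    and reach the other replicas only at a sync point. Names here are
    illustrative, not drawn from any real library."""
    def __init__(self, n_replicas):
        self.replicas = [Replica() for _ in range(n_replicas)]
        self.pending = []  # writes not yet propagated to all replicas

    def write(self, replica_id, key, value):
        # Fast path: touch only the local replica, record for later sync.
        self.replicas[replica_id].data[key] = value
        self.pending.append((key, value))

    def read(self, replica_id, key):
        # May return stale data between sync points; that is the deal.
        return self.replicas[replica_id].data.get(key)

    def sync(self):
        # Synchronization point: propagate every pending write everywhere.
        for key, value in self.pending:
            for r in self.replicas:
                r.data[key] = value
        self.pending.clear()

store = WeaklyConsistentStore(3)
store.write(0, "x", 1)
print(store.read(1, "x"))   # None: replica 1 has not yet seen the write
store.sync()
print(store.read(1, "x"))   # 1: replicas agree after the sync point
```

A strictly consistent version would have to update all replicas inside `write`, paying a round of communication on every store; deferring that work to `sync` is exactly the "control for performance" compromise the text describes.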


The Bazaar as a Distributed System

Back to Torvalds and the bazaar. The bazaar model of software development can be thought of as a wild band of hackers working together to solve complex software problems. The model follows a seemingly unstructured path of continual releases of versions distributed among developers everywhere who immediately attack the system with the passion of dedicated hackers. "With Linux, you can throw out the entire concept of organized development, source control systems, structured bug reporting, or statistical analysis. Linux is, and more than likely always will be, a hacker’s operating system—what we mean by ‘hacker’ is a feverishly dedicated programmer … [not a] wrongdoer or an outlaw" [10]. This loosely coupled bunch of autonomous, sometimes even anonymous, hackers demonstrates the performance gains of distributed systems, producing robust software in a timely manner. Raymond draws the obvious comparison of "the cost of the 16-bit to 32-bit transition in Microsoft Windows with the nearly effortless up-migration of Linux during the same period, not only along the Intel line of development but to more than a dozen other hardware platforms including the 64-bit Alpha as well" [7]. Bugs are often located and fixed within a matter of hours. And, surprisingly, bugs are found at much higher rates and at much greater depths than in traditional systems. As Raymond points out, in an insightful, if not intentional, reference to the connections between OSD and distributed systems, "Debugging is parallelizable."

The OSD model as a distributed system faces similar challenges. Source control, with a random set of hackers pecking away at the latest Linux release, is pretty much out the window. But, again, it seems the bazaar functions as all good distributed systems should. Just as the weak consistency model of distributed shared memory balances consistency needs with performance needs, so does the bazaar. Consistency will work itself out in the next go-round. If Torvalds worried too much about source control, the system would bog down instantly. With OSD, if you cannot risk a buggy release you simply go back to an older, more stable version. Yet you can bet the product will be substantially better in only a few weeks.


The Web

Since the Web is the world’s most prolific distributed system, as well as the most chaotic and bazaar, the OSD model would seem to be a good candidate for Web software projects. In fact, we should note that if it were not for the Web, the OSD community would not even exist: there would be no communication system to support the distributed collaboration. Raymond describes one OSD project on which he worked for several years without ever meeting the key developers in person, despite daily collaborative work. This is true transparency. Here, we investigate the union and fit of OSD and the Web.

The Web is certainly a distributed system. When discussing the original intentions and motivations behind the Web, Tim Berners-Lee, its inventor, usually describes lofty notions of unlimited access to unlimited resources of information. The Web sought to link all the information in the world with hypertext and grant universal access to all, thereby increasing the global depth of humanity. The utopian themes are more than slight in Berners-Lee’s early writings. He speaks of "social understanding" and "a single universal space" [1]. He even compares the Web to the Unitarian church. One would think he must be somewhat disappointed to find that much of the single universal space currently holds explicit pictures of human intercourse. Social understanding, perhaps?

So, the Web has not gone in the directions its creators originally intended. However, it has demonstrated one of the most vital characteristics of distributed systems—openness. In order to handle heterogeneity and extension, distributed systems must be built on open and extensible technologies. The Web demonstrates this openness through implementing simple and open standards of communication and software. No realm of software has been more open than the Web, and no realm of software has seen similar levels of flexibility, extension, and speed of growth.

The most critical feature of the Web as a distributed system is its protocol-based communication. There is no room for closed proprietary technologies in a system that expects, as Berners-Lee envisioned, every user in the world to have access to its content despite their heterogeneity. A single universal space requires a single universal language—TCP/IP. The original specification of a layered protocol stack upon which heterogeneous distributed systems could build their communication systems, and upon which TCP/IP was closely modeled, is of course the Open Systems Interconnection model [5]. The use of layered protocols specified in open standards allows anyone to implement these communication protocols on their systems, thus building the foundation for transparent and open communication.
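The openness of these layered protocols is easy to see in practice: an HTTP message is nothing but plain text whose format is published in an open standard, so any system on any platform can produce or parse one. The sketch below (helper names are our own, not from any real API) builds a request line by line and parses a status line, exactly as any of the heterogeneous Web clients and servers must:

```python
def build_request(host: str, path: str) -> bytes:
    # Application layer: an HTTP/1.1 GET request, assembled per the open
    # standard. The bytes below would be handed to TCP, the layer beneath.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def parse_status(response: bytes) -> int:
    # The reply is equally transparent: a status line such as
    # "HTTP/1.1 200 OK", followed by headers and a body.
    status_line = response.split(b"\r\n", 1)[0]
    return int(status_line.split(b" ")[1].decode("ascii"))

req = build_request("example.org", "/")
print(req.decode("ascii").splitlines()[0])        # GET / HTTP/1.1
print(parse_status(b"HTTP/1.1 200 OK\r\n\r\n"))   # 200
```

Nothing here is proprietary or hidden: anyone who can read the published specification can implement both ends, which is precisely the foundation for transparent and open communication the text describes.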


Compromise: Control vs. Performance

One of the most successfully flexible open protocols is HTTP. Yet Berners-Lee, who designed it, complains that the original intent of this protocol was to support communication between distributed objects. He openly bemoans the misuse of this protocol, as it has morphed into a request-and-reply system between client browsers and Web servers, claiming the roles now filled by CORBA and RMI could have been—and should have been—better filled by his HTTP. "HTTP was originally designed as a protocol for remote operations on objects, with a flexible set of methods. The situation in which distributed object-oriented systems such as CORBA, DCOM, and RMI exist with distinct functionality and distinct from the Web address space causes a certain tension [emphasis ours], counter to the concept of a single space" [1]. It’s truly ironic that Berners-Lee fails to recognize that, despite current misuse, this issue only underscores the strength of these open protocols. One cannot help but refer to his own social and philosophical reflections on the Web. In a social distributed system, of which the Web and OSD communities are two supreme examples, openness means the direction of growth cannot be tightly controlled. With too much precise control, the performance of a distributed system soon fails.
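The "flexible set of methods" Berners-Lee mentions can be made concrete. The toy dispatcher below (purely illustrative; it is not any real server framework) treats each URL path as an object in a single address space and each HTTP method as a remote operation on that object, which is the reading of HTTP his quote describes:

```python
# A single namespace of objects, standing in for the Web's address space.
objects = {}

def handle(method: str, path: str, body=None):
    """Dispatch an HTTP method as an operation on the object at `path`."""
    if method == "PUT":          # create or replace the object's state
        objects[path] = body
        return 201, body
    if method == "GET":          # read the object's state
        if path in objects:
            return 200, objects[path]
        return 404, None
    if method == "DELETE":       # remove the object
        objects.pop(path, None)
        return 204, None
    return 405, None             # method not supported on this object

print(handle("PUT", "/widgets/1", {"color": "red"}))  # (201, {'color': 'red'})
print(handle("GET", "/widgets/1"))                    # (200, {'color': 'red'})
print(handle("DELETE", "/widgets/1"))                 # (204, None)
print(handle("GET", "/widgets/1"))                    # (404, None)
```

Seen this way, the request-reply browsing that dominates the Web uses only the GET corner of a protocol that was designed as a general mechanism for operating on remote objects—the very role CORBA and RMI later claimed.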

This notion that openness lies at the heart of the Web’s functionality extends much higher than the communication protocols upon which the low-level communication of the Web functions. As noted earlier, Netscape has moved to make its client software, formerly known as a browser, an open source project called Mozilla. This is a corporate strategy that surely aims at an eventual proprietary cutoff. Netscape admits its ultimate goal is to use the OSD model to proliferate the growth and distribution of its client software, which will then "further seed the market for Netscape’s enterprise solutions and Netcenter business" [6]. To produce a quality, user-satisfying product, which will lead to its hoped-for ubiquity, Netscape has turned to the open source model. However, one might want to put Netscape in touch with Berners-Lee on one minor point. It may find that the development of its client software, under the open source energy of the Mozilla project, does not strictly adhere to its corporate strategy. This, as Berners-Lee might point out, might cause a "certain tension."


The Future: A Certain Tension

It will certainly be interesting to see if the powers of capitalism can harness the energies of OSD without destroying it. The commodification of production, according to Karl Marx, alienates the worker from his creative output. According to Marx, and Scott Adams (creator of the Dilbert cartoon strip), alienation of the worker leads to his or her dehumanization. In the views of both Marx and Adams, this may lead to worker rebellion, or, at least, lead to extremely bored and unmotivated workers. They show up at work only for the money, or because they no longer possess the will to leave. It’s important to listen to Raymond’s description of the fervent hackers of the open source community and their motivational and creative energies: "We have fun doing what we do. Our creative play has been racking up technical, market-share, and mind-share successes at an astounding rate. We’re proving not only that we can do better software, but that joy is an asset" [7].

So, as everyone seems to recognize, the OSD model produces an unprecedented quality of feature-rich software in amazing time frames. And it does this by dismissing the highly centralized and controlled, if not oppressive, forms of traditional software engineering. Yet, a certain tension exists between these fervent hackers and those that angle to make a traditional capitalistic buck off of their production. The tension between the market and the software can be easily illustrated by looking at specific features of the Linux system, notably, Linux’s lack of support for WinModems. While Linux demonstrates extreme reliability and user-based features, it fails to produce an out-of-the-box point-and-click Internet connection. If one believes the marketing of Microsoft, this would indicate that Linux fails to provide the most important function of an OS—surfing the Web. Two points: Surfing the Web has been sold to the public as the supposed functionality of the PC. And the only reason it’s difficult (that is, a few hours of driver research) to get the Linux system online is the proprietary nature of the WinModem. You might say there is a certain tension between the OSD of the Linux system and the proprietary line drawn down the center of the modem, half of which is hardware and half closed source software written for Windows [4].

So, as Berners-Lee suggests, there is indeed a certain tension. The W3C recently proposed a new Patent Policy Framework that would expose certain new high-level software for Web access to more restrictive licensing than that of the open standards upon which the Web has been built. The W3C suggests that certain low levels must remain open: "Preservation of interoperability and global consensus on core Web infrastructure is of critical importance" [9]. But certain "higher-level services toward the application layer may have a higher tolerance" for more restrictive licensing [9]. The various levels at which technologies will be exposed to more restrictive forms of licensing will be determined by the W3C.

Obviously some member groups of the W3C, those supporting OSD, are not happy with this movement toward restrictive licensing. The O’Reilly group, a member of the W3C, has written a formal contestation to this new Patent Policy Framework. They write: "the W3C commits to keeping core standards royalty-free, but sets up the opportunity for ‘higher layer’ standards to be chartered under RAND licensing … one reason the W3C exists is that the … Web was [once deemed] a higher-level application … the distinction between high and low often proves meaningless and depends on the interests of those drawing the maps of the layers" [3].

Those, like Netscape, who wish to draw a line above which the open source stops and the proprietary buck starts may very well try to decide when and where the restrictive licensing begins. This may also be the line above which quality software ends. Again, there is a certain tension: What will win—quality software and creative energy or the corporate world and the almighty dollar?


References

    1. Berners-Lee, T. Architectural and Philosophical Points. 1998.

    2. Distributed Shared Memory. 2001.

    3. Dougherty, D. O'Reilly Opposes W3C Patent Policy. 2000.

    4. Netscape. Unlimited Distribution. 1998.

    5. Peterson, L.L. and Davie, B.S. Computer Networks: A Systems Approach. Morgan Kaufmann, San Francisco, CA, 2000.

    6. Peterson, R. Linux: The Complete Reference. McGraw-Hill, Berkeley, CA, 2000.

    7. Raymond, E. The cathedral and the bazaar. 1997.

    8. Silberschatz, A., Galvin, P., and Gagne, G. Applied Operating System Concepts. John Wiley, New York, 2000.

    9. Weitzner, D.J., Ed. W3C Patent Policy Framework. 2001.

    10. Welsh, M., Dalheimer, M.K., and Kaufman, L. Running Linux. O'Reilly & Associates, Sebastopol, CA, 1999.
