Revisiting the Great Objects Debate.
Ever read the book "The Art of Unix Programming" by Eric Raymond? Your arguments are basically the old Unix philosophy: make small components and combine them to accomplish something bigger.
I think this article raises important issues.
A good example of a large system I consider "object-oriented" is the Internet. It has billions of completely encapsulated objects (the computers themselves) and uses a pure messaging system of "requests not commands", etc.
By contrast, I have never considered that most systems which call themselves "object-oriented" are even close to my meaning when I originally coined the term.
So part of the problem here is a kind of "colonization" of an idea -- which got popular because it worked so well in the ARPA/PARC community -- by many people who didn't take the trouble to understand why it worked so well.
And, in a design-oriented field such as ours, fads are all too easy to hatch. It takes considerable will to resist fads and stay focused on the real issues.
Combine this with the desire to also include old forms (like data structures, types, and procedural programming) and you've got an enormous confusing mess of conflicting design paradigms.
And, the 70s ideas that worked so well are not strong enough to deal with many of the problems of today. However, the core of what I now have to call "real oop" -- namely encapsulated modules all the way down with pure messaging -- still hangs in there strongly because it is nothing more than an abstract view of complex systems.
The key to safety lies in the encapsulation. The key to scalability lies in how messaging is actually done (e.g. maybe it is better to only receive messages via "postings of needs"). The key to abstraction and compactness lies in a felicitous combination of design and mathematics.
The key to resolving many of these issues lies in carrying out education in computing in a vastly different way than is done today.
Best wishes,
Alan
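To make the contrast concrete, here is a minimal Ruby sketch of encapsulated state with pure messaging, in the spirit of Kay's description; the class and message names are invented for illustration, and his "postings of needs" would go further still, with objects subscribing to needs rather than being addressed directly.

  # A sketch of an encapsulated module driven purely by messages.
  class Cell
    def initialize
      @contents = nil               # fully private; no accessor is exposed
    end

    # The only way in is a message; the receiver decides the outcome.
    def receive(message, payload = nil)
      case message
      when :store then @contents = payload
      when :fetch then @contents
      else :message_not_understood  # the receiver, not the sender, owns errors
      end
    end
  end

  cell = Cell.new
  cell.receive(:store, 42)
  cell.receive(:fetch)              # => 42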
The author's difficult experiences with re-use of object-oriented code come more, I think, from poorly-designed systems than flaws in OOP.
My attempts at the re-use of other people's code have often been frustrating, regardless of the programming paradigms (OOP, structured programming, or just plain code).
Conversely, I have had success with OOP on small teams. We structured our code and refactored away redundancy; we leveraged inheritance to push common code into parent classes and keep specific code in derived classes. Is this code re-use? Yes, in a small sense. Is it the grand idea of re-use of code by anyone, on another (unrelated) system? Certainly not.
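A minimal sketch of that kind of refactoring, with invented names, might look like this:

  # Shared behavior lives in the parent; subclasses supply only what differs.
  class Report
    def initialize(rows)
      @rows = rows
    end

    def render                      # common code, reused by every subclass
      header + @rows.map { |row| format_row(row) }.join("\n")
    end
  end

  class CsvReport < Report          # specific code stays in the derived class
    def header
      "id,name\n"
    end

    def format_row(row)
      "#{row[:id]},#{row[:name]}"
    end
  end

  CsvReport.new([{ id: 1, name: "Ada" }]).render   # => "id,name\n1,Ada"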
Object-oriented programming will not guarantee understandable, re-usable code. Neither did structured programming, flowcharts, or high-level compilers.
But...
It's the best we've got. (So far.)
Object-oriented programming lets us group (and split) our concepts. And as the good Mr. Kay observes, safety lies in encapsulation. OOP gives us that encapsulation.
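A small sketch of what that encapsulation buys us (the names are invented):

  # The invariant, a safe temperature range, is sealed inside the object;
  # no caller can push a Thermostat into a bad state.
  class Thermostat
    RANGE = (5..30)

    def initialize
      @target = 20
    end

    def adjust(by)
      @target = (@target + by).clamp(RANGE.min, RANGE.max)
    end

    def target
      @target
    end
  end

  t = Thermostat.new
  t.adjust(+100)
  t.target   # => 30, never an unsafe value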
Recent efforts in language design have given us dynamic languages and functional languages. These offer new possibilities for structuring programs. They build on OOP, just as OOP built on structured programming.
OOP may not be dominant, but it will be part of our future.
The core of OO lies elsewhere; OOP is about the tool. The name Simula-67, perhaps the Adam and Eve of OOP, gives a first clue. During his lecture in Leuven (B) in 1986, Jackson gave me a second clue: the world-of-interest is much more stable than the user requirements, software features, or functionality. Note: Jackson was teaching about developing administrative software in COBOL, not about OOP.
Jackson's example was personnel administration, where hiring and promoting people remain relatively constant features of the domain. The report-generating functionality, requested by personnel management and required by law, is likely to change far more frequently. Therefore, Jackson recommended modeling the world-of-interest first, including track-and-trace, and implementing the functionality and features required by the user second, where each feature interacts with this mirror image of the world-of-interest.
Allow me to add a third clue: when the world-of-interest is part of the real world, the pieces of your software that model parts of the real world inherit the consistency and coherence of the real world. Integrating such pieces of software is analogous to integrating road maps: they may have different conventions and include different aspects of the world-of-interest, but they cannot conflict in the way policies, laws, rules, and resource-allocation decisions often do.
Therefore, true OO adopts the Unified Process with an additional constraint. At first, the use cases serve only to identify the relevant entities in the problem domain. They are to be forgotten while the developers create software that mirrors the problem domain; these developers must not rely on use-case information to speed up or simplify this first effort. When a software model (or modeling facility) for the problem domain or world-of-interest is available, the use cases re-enter the picture and the user needs are addressed. Thus, OO is about creating software artifacts whose validity and (re)usability depend solely on the presence of stable counterparts in the real world.
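A minimal Ruby sketch of that ordering, reusing the personnel example from above (all names invented):

  # Step 1: mirror the world-of-interest. Hiring and promotion are stable.
  class Employee
    attr_reader :name, :grade, :history

    def initialize(name)
      @name = name
      @grade = 1
      @history = [[:hired, Time.now]]    # track-and-trace from the start
    end

    def promote
      @grade += 1
      @history << [:promoted, Time.now]
    end
  end

  # Step 2: features interact with the mirror image and may change freely.
  def grade_report(employees)
    employees.map { |e| "#{e.name}: grade #{e.grade}" }.join("\n")
  end

The report can be rewritten at will; the Employee model changes only when the world-of-interest does.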
Why is it so difficult to communicate this insight in the IT community? The answer is twofold. First, many software developments (e.g., administrative applications) have a standardized world-of-interest that is so stable and omnipresent that the problem domain model has become implicit. Moreover, many of these applications have a world-of-interest that is artificial (and partially standardized by legacy). Without a community effort and common understanding, explicit problem domain modeling that mirrors the affected real-world entities remains uninteresting and infeasible for individual players.
Second, a lot of software developments cannot tolerate an explicit problem domain model in the final application (e.g., telecom and embedded systems, where power consumption and execution speed are key concerns). They require the domain model to be compiled into the final code.
In view of IT being a young domain suffering from a shortage of talented developers, and given the affinity of IT professionals with the above two classes of software, the full contribution of OO remains largely untapped. However, if IT needs to penetrate application domains where the penalty of imposing an IT-centric problem domain model (cf. the first case) is prohibitive, or where a compiled problem domain model is an unsolved issue, the OO approach as pioneered by Jackson represents the answer for which there are few alternatives. And these domains are important to society: traffic, production, logistics, energy, and health.
Therefore, teaching OOP from the start is not sufficient, but if there are no compelling reasons to do otherwise, it may prepare the ground for the right kind of OOD. If the Jackson approach presented here can be equally well disseminated without OOP, then the issue remains open.
A reader has brought to my attention Soren Lauesen's article "Real-Life Object-Oriented Systems", IEEE Software, March/April 1998, 76-83. (For those without access to the "competition", a preliminary version appears at http://www.itu.dk/~slauesen/Papers/Oo-real.pdf.) Lauesen's central finding is that in _real_ OO applications, especially in business, most objects tend to be "degenerate"; that is, they are just data structures or libraries of procedures. This is consistent with Alan Kay's complaint that OO is not being used as originally conceived, where objects do significant computation in response to receiving a message.
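The contrast Lauesen describes is easy to picture; a sketch with invented names:

  # A "degenerate" object: pure data, with all computation done elsewhere.
  class InvoiceRecord
    attr_accessor :amount, :paid
  end

  # An object as originally conceived: it computes in response to a message.
  class Invoice
    def initialize(amount)
      @amount = amount
      @paid = 0
    end

    def settle(payment)     # the behavior lives with the data it governs
      @paid += payment
      @paid >= @amount ? :closed : :open
    end
  end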
In the following issue of IEEE Software I found the article "Does OO Sync with How We Think?" by Les Hatton. Along with an empirical study (bugs in an OO program in C++ take much longer to fix than bugs in a similar non-OO C program), Hatton discusses the claim that thinking in terms of OO is natural, an issue I raised in conjunction with the research by Hadar and Leron. While Hatton finds that encapsulation _partially_ fits the way we think, he claims that this is not at all true with the other central concepts of OO -- inheritance and polymorphism. His conclusion: "But OO is not naturally and self-evidently associated with the least error-prone way of reasoning about the world and should not be considered a primary candidate for a more effective programming paradigm".
These papers describe empirical studies that support my views. What bothers me most is that proponents of OO cannot point to _empirical studies_ supporting their claims for the superiority of OO.
The following letter was published in the Letters to the Editor in the January 2011 CACM (http://cacm.acm.org/magazines/2011/1/103186).
--CACM Administrator
Unlike Mordechai Ben-Ari in his Viewpoint "Objects Never? Well, Hardly Ever!" (Sept. 2010), I found learning OOP exciting when I was an undergraduate almost 30 years ago. I realized that programming is really a modeling exercise and that the best models reduce the communication gap between computer and customer. OOP provides more tools and techniques for building good models than any other programming paradigm.
Viewing OOP from a modeling perspective makes me question Ben-Ari's choice of examples. Why would anyone expect the example of a car to be applicable to a real-time control system in a car? The same applies to the "interface" problem in supplying brake systems to two different customers. Modeled properly, there would be no need to change the "interface" to the internal control systems, contrary to Ben-Ari's position.
Consider, too, quicksort as implemented in Ruby:
def quicksort(v)
  return v if v.nil? or v.length <= 1
  less, more = v[1..-1].partition { |i| i < v[0] }
  quicksort(less) + [v[0]] + quicksort(more)
end
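For example, a quick sanity check of the function above:

  quicksort([5, 3, 8, 1])   # => [1, 3, 5, 8]
  quicksort([])             # => []
  quicksort(nil)            # => nil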
This concise implementation shows quicksort's intent beautifully. Can a nicer solution be developed in a non-OOP language? Perhaps, but only in a functional one. It is also interesting to compare this solution with those in 30+ other languages at http://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Quicksort, especially the Java versions. OO languages are not all created equal.
But is OOP dominant? I disagree with Ben-Ari's assertion that "...the extensive use of languages that support OOP proves nothing." Without OOP in our toolbox, our models would not be as beautiful as they could be. Consider again the Ruby quicksort: it has no obvious classes or inheritance, yet the objects themselves (arrays, iterators, and integers) are all class-based and have inheritance. Even if OOP is needed only occasionally, the fact that it is needed at all, and that it subsumes other popular paradigms (such as structured programming), supports the idea that OOP is dominant.
I recognize how students taught flowcharts first (as I was) would have difficulty switching to an OO paradigm. But what if they were taught modeling first? Would OOP come more naturally, as it did for me? Moreover, do students encounter difficulties due to the choice of language in their first-year CS courses? I'm much more comfortable with Ruby than with Java and suspect it would be a better introductory CS language. As it did in the example, Ruby provides better support for the modeling process.
Henry Baragar
Toronto
The following letter was published in the Letters to the Editor in the January 2011 CACM (http://cacm.acm.org/magazines/2011/1/103186).
--CACM Administrator
I respect Mordechai Ben-Ari's Viewpoint (Sept. 2010), agreeing there is neither a "most successful" way of structuring software nor even a "dominant" way. I also agree that research into success and failure would inform the argument. However, he seemed to have fallen into the same all-or-nothing trap that often permeates this debate. OO offers a higher level of encapsulation than non-OO languages and allows programmers to view software realistically from a domain-oriented perspective, as opposed to a solution/machine-oriented perspective.
The notion of higher levels of encapsulation has indeed permeated many aspects of programmer thinking; for example, mobile-device and Web-application-development frameworks leverage these ideas. The core tenets of OO were envisioned to solve the software-development problems prevalent at the time.
Helping my students become competent, proficient software developers, I find the ones in my introductory class move more easily from an OOP-centric view to a procedural view than in the opposite direction, but both types of experience are necessary, along with others (such as scripting). So, for me, how to start them off and what to emphasize are important questions. I like objects-first, domain-realistic software models, moving as needed into the nitty-gritty (such as embedded network protocols and bus signals). Today's OO languages may indeed have deficiencies, but returning to an environment with less encapsulation would mean throwing out the baby with the bathwater.
James B. Fenwick Jr.
Boone, NC
The following letter was published in the Letters to the Editor in the January 2011 CACM (http://cacm.acm.org/magazines/2011/1/103186).
--CACM Administrator
The bells rang out as I read Mordechai Ben-Ari's Viewpoint (Sept. 2010): the rare, good kind, signaling I might be reading something of lasting importance. In particular, I noted his example of an interpreter being "nicer" as a case/switch statement; some software is simply action-oriented and does not fit the object paradigm.
His secondary conclusion, that Eastern societies place greater emphasis on "balance" than their Western counterparts, to the detriment of the West, is equally important in software. Objects certainly have their place but should not be advocated to excess.
Alex Simonelis
Montreal
The following letter was published as a Letter to the Editor in the November 2010 CACM (http://cacm.acm.org/magazines/2010/11/100636).
--CACM Administrator
Though I agree with Mordechai Ben-Ari's Viewpoint "Objects Never? Well, Hardly Ever!" (Sept. 2010) that students should be introduced to procedural programming before object-oriented programming, dismissing OOP could mean throwing out the baby with the bathwater.
OOP was still in the depths of the research labs when I was earning my college degrees. I was not exposed to it for the first few years of my career, but it intrigued me, so I began to learn it on my own. The adjustment from procedural programming to OOP wasn't just a matter of learning a few new language constructs. It required a new way of thinking about problems and their solutions.
That learning process has continued. The opportunity to learn elegant new techniques for solving difficult problems is precisely why I love the field. But OOP is not the perfect solution, just one tool in the software engineer's toolbox. If it were the only tool, we would run the risk psychologist Abraham Maslow warned of: if the only tool you have is a hammer, every problem tends to look like a nail.
Learning any new software technique (procedural programming, OOP, or simply what's next) takes time, patience, and missteps. I have made plenty of missteps myself learning OOP, as well as other technologies, and I continue to learn and improve because of them.
For his next sabbatical, Ben-Ari might consider stepping back into the industrial world for a year or two. We've learned a great deal about OOP since he left for academia 15 years ago.
Jim Humelsine
Neptune, NJ