Robert L. Glass’s suggestion that domain-specific languages are superior to domain-independent languages is an interesting point ("One Giant Step Backward," Practical Programmer, May 2003). As a long-time programmer, I tend to agree with his assessment that we might actually have done some things better in the past.
Meanwhile, however, the industry has apparently replaced domain-specific languages with a two-pronged approach based on common, domain-independent languages and domain-specific protocols. Whether or not the switch was intentional is debatable. Regardless, we used to begin projects by looking at requirements, then selecting an appropriate language; today, we begin by assuming the language will be the one known by the most (or at least most influential) people in the group, no matter what the application requirements. The specifics of the application are instead captured in the selection and configuration of the protocols.
This approach requires less retraining time for programmers, thus increasing their productivity. It also has a second, unintended consequence: we trade complexity in programming languages for complexity in protocols. The resulting explosion of protocols over the past decade has reached epic proportions.
Perhaps Glass’s point should be rephrased as "pick your poison." Is it better to have multiple, domain-specific languages with simple protocols or a few domain-independent languages with a plethora of complex domain-specific protocols? Personally, I vote for the former.
Ralph Castain
Fort Collins, CO
Robert L. Glass wished for "the development of newer, domain-focused replacements for the tired old specialty languages Fortran and Cobol." But Fortran has not, in fact, stood still all these years, undergoing revisions in Fortran90 and Fortran95; Fortran2000 has been defined and is up for a vote. Some computer scientists may also be surprised to learn there was even a revision in Fortran77. Fortran95 serves the scientific/engineering community quite well, with many modern features specially designed for its needs.
Recent surveys have found that most scientific computation is done in Fortran, including some of the largest calculations in the world on parallel supercomputers simultaneously utilizing as many as 4,000 CPUs. Fortran’s major competition is not other languages but computational environments, including Matlab and Mathematica. They are relatively slow, however, and Fortran is still important when high performance is critical.
There is a substantial gulf between the computer science community and the scientific community. The former holds Fortran in derision (often along with programming itself); the latter feels the concerns of computer science are largely irrelevant to scientific computing. The two thus go their separate ways.
Viktor Decyk
Los Angeles
Did PL/1 fail because it was too general, because IBM’s even-Suzie-the-stenographer-can-write-it campaign was outrageous, because of Computerworld calumnies, or because it entered the field too late? My vote is for too late. By the time PL/1 was usable (late 1960s), not only were the Cobol and Fortran practitioners set in their ways, but the economics of conversion were overwhelmingly against it.
The problem was not that PL/1 was inferior to Cobol for what most Cobol programs did but that it was not enough of an improvement to justify the change. Legacy applications either had to be maintained by Cobol programmers or converted to PL/1. In the first case, staffs would have had to be either bilingual or split between despised maintenance (Cobol) and exalted development (PL/1). Since Cobol was adequate and, in most cases, generated code quicker, the cost of switching could not be justified.
The case of PL/1 vs. Fortran was not as clear but not much different. Most Fortran programmers of my acquaintance considered programming a useful skill. They had learned to program well enough on DEC or CDC computers, on pre-360 IBM computers, or from instructors who had learned their programming on these machines. If they now had 360s for computing hardware, many of the library packages they used were not only written in Fortran but required they use the language’s FORMAT statement to feed the package data. Once again, it was not that Fortran was better for this kind of application than PL/1 but that switching was not justified.
It is beside the point here but irresistible to note that the specifics of features introduced or modified by PL/1, like its string-handling functions, are still with us in awk, Perl, C#, and elsewhere. Not so incidentally, those features were added later to Fortran and Cobol.
So which features of PL/1 weighed against using it in place of the "domain-focused" languages of its time? The argument was that if you did not need the kitchen sink (bit and character strings, dynamic allocation, events and multitasking, an initially weak preprocessor), you could simply ignore it.
Ben Schwartz
Andover, NJ
Robert L. Glass stated no more than a fact when he described Fortran as "old" in the May issue. Whether it is "tired" is another matter altogether. Granted a new lease on life by the introduction of an array language and data abstraction (to name but two items) in Fortran90 and by features for parallel programming (based on "research devoted to [a] specific domain") in Fortran95, it is about to appear with full OO capabilities and enhanced numerical features in Fortran2000, expected to be approved this year. (For a sense of current use, I recommend a visit to the newsgroup comp.lang.fortran.) It may not appeal to computer scientists but remains a well-honed tool for the scientific and mathematical community.
Michael Metcalf
Berlin, Germany
Robert L. Glass’s May column was more nostalgic paean than realistic assessment of the evolution of programming languages. Languages have evolved as the requirements of the specific domains have evolved; today, there is much more domain-specific functionality than before. For example, Cobol was a great language for manipulating business records and was later tweaked to deal with online demands through its marriage to a transaction monitor. But modern business systems must also be adaptable, operate in real time, and include complex interfaces. Java and Microsoft’s Visual Basic in its .NET incarnation have grown to meet these needs and are today commonly used business languages. Moreover, data is usually decoupled from specific business processes; the SQL language was developed and refined to address this new "domain." Object extensions have taken this even further, with more-or-less satisfactory results.
C++ is the most commonly used language today for systems programming. As Glass pointed out, it evolved from C, which was designed as a systems language for Unix, and its descendants have followed the same path.
As for report writing, we have a plethora of tools with language-like capabilities (as well as automation wizards), including Business Objects, OLAP products, and Crystal Reports. Looking further, several complex environments with programmable capabilities, including those from PeopleSoft, SAP, and Siebel, are far more expressive of business concepts than Cobol.
Another aspect of domain functionality Glass did not address, but which is a feature of modern OO languages, is the ability to extend general-purpose languages to suit specific application domains via APIs and object libraries (such as messaging, security, data access, and mathematical analysis).
Fortran is the only specialized language Glass mentioned for which there is no better modern replacement. Neither Java nor C++, even with suitable libraries, offers its ease of development and performance for numerically intensive applications.
Surveying the language landscape, I see domain-specific languages and features (driven by new requirements) continuing to evolve and flourish from the antecedents cited by Glass.
Rob Dublin
Croton-on-Hudson, NY
Author Responds:
I was surprised and pleased by the number of readers who took up my challenge regarding the value of the old-time, domain-specific programming languages. The insight they have added to my column enriches the dialogue on the subject immensely.
I feel the need to respond to only one comment. I often hear that today’s languages consist of a domain-independent programming language core, along with a capability for domain-specific extensions. There are at least two problems with this approach:
- Domain-specific extensions are rarely created. Most application programmers have no interest in, or time for, providing them; they are too busy solving domain-specific problems.
- Extensions are a clumsy way to address application problems. Some programmers have suggested using procedure calls to object libraries for these extensions. Would scientific programmers want to do procedure calls in order to do matrix arithmetic? Would information systems programmers do procedure calls to provide report-generation capabilities?
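Glass’s matrix-arithmetic question can be made concrete with a short sketch (written in Python purely for illustration; the names mat_add, mat_mul, and Matrix are hypothetical stand-ins, not anything from his column): the same computation first as a chain of procedure calls to an object library, then with the operator syntax a domain-focused language would provide natively.

```python
# Illustration of the ergonomic gap the response describes: matrix
# arithmetic via explicit procedure calls versus operator syntax.
# All names here are hypothetical, for demonstration only.

def mat_add(a, b):
    """Element-wise sum of two equally sized matrices (lists of lists)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_mul(a, b):
    """Conventional matrix product of a (m x n) and b (n x p)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

class Matrix:
    """Thin wrapper giving the same operations operator syntax."""
    def __init__(self, rows):
        self.rows = rows
    def __add__(self, other):
        return Matrix(mat_add(self.rows, other.rows))
    def __matmul__(self, other):
        return Matrix(mat_mul(self.rows, other.rows))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Procedure-call style: every operation is a named library call.
r1 = mat_mul(mat_add(A, B), A)

# Operator style: the expression reads like the mathematics.
r2 = ((Matrix(A) + Matrix(B)) @ Matrix(A)).rows

assert r1 == r2  # identical results; only the notation differs
```

Both forms compute (A + B) x A; the scientific programmer’s complaint is not about capability but about having to spell every operation as a call.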
Just because computer science researchers think in domain-independent ways does not mean application programmers should have to do so, too.
Robert L. Glass
Bloomington, IN
Don’t Forget Experience in Role Definition
I realize that Phillip Armour’s point ("In the Zone: The Need for Flexible Roles," The Business of Software, May 2003) was that too much structure in roles is usually not productive because it limits communication, participation, and perspectives in a project. His insight about "software as a knowledge medium and software development as a knowledge acquisition process . . ." is especially powerful. But his justification of flexible roles on the back of precise role definition bothered me, since it implies a role can be perfectly defined. Linking the "effectiveness of predefined roles" to irrelevance for discovery omits the value of a priori knowledge and experience in the processes of the requirements domain. Psychologist R.D. Laing explained it this way: "Experience is a body of knowledge that cannot be taught."
Bob Morrison
Los Altos, CA
Correct Me If I’m Wrong
Reading "Gramr CWOT?" (News Track, May 2003) made me think that linguists might not be the only ones to enjoy this rare springing to life of a new means of communication. Surely the fact that effective language can develop in very constrained environments is useful to anyone who communicates. In a sense, the underlying wit is on par with Cockney rhyming slang, now viewed as a British cultural heritage, though correct grammar is notably absent. CMIIW.
Frans Swaalf
Driebergen, The Netherlands
Article or Ad?
You are likely receiving hundreds of email messages concerning Gerry Miller’s ".NET vs. J2EE" (June 2003). How could Communications let a senior Microsoft officer cover the topic at the heart of the competition between his company and the companies investing in Java technology, with billions of dollars at stake? Such an article puts Communications in the league of PC and IT publications whose business models rely on companies paying for placing ads disguised as articles or user reviews. Don’t be surprised if your readers are asking questions like: "How much did Microsoft pay for placing this article?" You asked for it.
Martin Henz
Singapore
Gerry Miller’s article was one side of a debate in the same issue on competing architectures; please see the other side of the story in "J2EE vs. .NET" by Joseph Williams, p. 58.—Ed.