Forum

  1. The Folly of Laws Limiting Y2K Liability
  2. Regarding Glass
  3. Duly Noted
  4. Persuasive Technologies
  5. Linux Goals
  6. Correction

The Folly of Laws Limiting Y2K Liability

Quality, consistency, and simplicity are critical attributes in high-tech products. But they are largely lacking in today’s software. Change is sorely needed. Software vendors, however, will not change voluntarily. They do not believe it is in their best interests to do so. But such change is usually brought about by pain, and vendors are not experiencing any right now.

Still, vendors aren’t the enemy. It’s not about them and us. We’re all in this together. This industry, its successes, and its problems are more than just the vendors. Users and system professionals need to make some equally dramatic changes, too. The Y2K situation is merely a symptom of the underlying quality and management problems in the IT world. The so-called high-tech work force shortage is largely, but not entirely, a symptom of these problems. The status quo is probably not sustainable in the long run because of its large percentage of waste, such as the 25%-plus total failure rate for software projects. How quickly it changes is largely a function of how painful it gets due to the Y2K problem in the short run. The auto industry sought improvements in quality and management practices when the financial pain from higher-quality and lower-priced imports became too great. To what degree the Y2K problem becomes the catalyst for such changes in software (and generally other high-tech) industries remains to be seen. Powerful forces for the status quo in the high-tech vendor community, along with its lobbyists, are doing everything possible to minimize its Y2K pain through legislative limits on potential liabilities. If passed, these laws will probably backfire in terrible ways.

Ethical, moral, and legal questions of getting others to pay to clean up your mess aside, the sad thing is that this course of legislative action chosen by IT vendors will probably fail to achieve the desired effect of less overall financial responsibility for them. That’s because the unintended demotivating effects of the legislation will increase the total amount of Y2K computer-problem damages. So, although high-tech vendors may be legally responsible for a smaller percentage of total Y2K computer damages, the increased amount of damages probably nets out to no change in vendors’ bottom-line costs. But their customers will also have a larger share of more pain for which to pay.

That’s not the worst of it. To add insult to injury, by inviting the government into our industry in the futile hope of mandating that the Y2K computer problem be magically solved, the vendor community has allowed the nose of the government into the industry’s tent. We will likely ever after have four groups to contend with, unless the original three can figure out how to self-manage this mess for the betterment of them all. If what happens instead is the big three of IT—vendors, customers, and techies—become further estranged because of these laws thanks to customer outrage at the irresponsible and beguiling behavior of vendors, the legislative repercussions could include licensing and malpractice insurance for programmers, government-mandated standards for high-tech products, and regulatory agencies policing IT operations. Impossible, you say? Ask doctors, auto manufacturers, and chemical companies what they think.

Is there any hope for a solution? Vendors could start acting more responsibly, like manufacturers in other industries, and strive for higher quality and interoperability. Users and customers—but most importantly, user executives—could start acting more responsibly, as they do with other assets, and pay more attention to IT decisions, projects, and practices. The language and culture gap between these two groups is great, and there is almost no history of direct interaction. New tricks will be difficult to learn, because the vendor tail has wagged the customer dog since the computer industry’s inception.

Techies represent the best hope of avoiding wasteful litigation and unnecessary legislative backlash due to the Y2K problem, aside from fixing all of the problems in time (something that cannot happen). They are also the best hope for helping the other two groups make the necessary changes in their behavior. But the techies have some serious changes to make in their own behavior. Historically, the techies have been the pawns of the vendors. Sure, the customers pay the techies’ salaries, but the vendors know what really motivates most of them—challenging new stuff to play with. Sure, the techies are supposed to make IT decisions in the best interests of their employers. But in most enterprises, there is little evidence they do so. Rather, in most cases, they make short-sighted decisions, without regard to the long-term life-cycle costs to their employers. Instead they create lots of work for themselves in a seemingly endless cycle of scrap and buy, scrap and buy. No doubt, this is a management problem. If executives would change, since they ultimately control the purse strings driving all of this, everything else would change too. But executives are largely intimidated by and distant from the technology, counting on their techies to take care of them. The Y2K situation is a "We let you down, big time" notice from the techies to their employers.

The best hope IT has of minimizing the intervention of regulators, legislators, and litigators is a willingness to change on the part of vendors and customers, and a proactive, customer-oriented, high-tech work force that brings vendors and customers together in meaningful dialog aimed at the continuous improvement of quality, consistency, simplicity, and profitability for all. I hope we are all up to the challenge.

Leon Kappelman
Oak Point, TX

Regarding Glass

I am very thankful that Robert Glass brought forward the issue of divisiveness (June 1999, p. 11). Progress comes from people who think against the grain. Without such nonconformist thinkers as Glass, we can’t advance in our computing professions.

It is sad that uttering an important word, anticipating the problem ahead, and putting process and activity in order are so sensitive and problematic.

I hope Glass continues his genuine work. The world is not divided immaculately into X and Y, and I hope one day Glass will be considered a unifier, a man of balance and harmony.

Ken Mandefrot
Toronto, Ontario

It’s ironic that Glass uses the Fortran/Algol conflict as an illustration, since today’s Fortran users are often ridiculed as hopelessly impractical Luddites by people on both sides of the theory/practice line. The problem with Glass’s style of criticism is that he takes the theory/practice distinction and applies it to people. In his example, by saying that the speaker, who presumably had an academic job, was breaking away from an "unhealthy tendency of [academic] computer science," Glass is at best damning with faint praise and at worst using a simple debating ploy. He sets up a straw man, then congratulates the speaker for helping him knock it down.

No one, academic or practitioner, gets anything done without using both theory and practice. To classify people as one or the other does not aid in communication. James Wilkinson, who was responsible for many practical advances in numerical linear algebra, also sought to prove the QR algorithm for tridiagonal matrices always converged using exact arithmetic—a very impractical result, since no one does exact arithmetic. My advice to Glass is to stop worrying about who’s academic and who’s practical and concentrate on how certain individuals, himself included, combine theory and practice to solve practical problems.

Chuck Crawford
Toronto, Canada

I see Glass as a bridge builder by nature. Relating the empirical nature of computing to the theoretical explorations of computation is an admirable contribution. I see him providing questions that ground what are otherwise pretty abstract and perhaps aimless inquiries. His discussion of the value of inspection (Apr. 1999, p. 17) is a great example. I think we could use more of that pragmatic sensibility and clarity that some of our most insightful theoretical thinkers have not been ashamed to apply.

Dennis E. Hamilton
Mountain View, CA

I read Communications primarily for the articles and columns that Glass and like-minded writers contribute. There is far too much pressure for academics to publish, thus producing a great deal of drivel (and some good stuff, too).

Unfortunately, students don’t get enough practical background before entering the job market and consequently struggle for a while turning out poor code. Books like Glass’s Software Conflict don’t see the light of day often enough at universities.

We need a few good computer scientists and an army of software engineers. There is a place for both.

Please keep up the style and emphasis of Glass’s columns.

Orville E. Wheeler
Memphis, TN

There is an old joke about a new university CIO who outlines for his oversight committee all the things he has done since his move to that position and shows with analytic detail the positive results. The rejoinder from a committee member is "That’s fine in practice, but how is it in theory?"

I think Glass’s columns have been even-handed, but what tends to happen is that the theorists usually feel more threatened for several reasons:

  • When you note theory deficiencies, you are attacking ideas and principles, which will be defended to the death.
  • Practitioners are accustomed to criticism and change and less likely to be defensive.
  • Theorists usually think they hold the high ground and superior status (think of architects and builders).
  • Most teaching is inherently theory-centered; training is for those other folks, those who have corporate drudge jobs.

I have complained in the past about Glass’s somewhat abstract view when noting practitioners’ laziness or lack of perseverance. Still, his critiques have been honest. Glass’s column is still 90% of the reason I am an ACM member.

Albert L. LeDuc
Miami, FL

Duly Noted

Whitman, Townsend, and Aalberts (June 1999, p. 101) write that "the recent Supreme Court decision struck down the obscenity provisions of the Communications Decency Act." This is incorrect. The Court left the obscenity provision intact, specifically choosing to sever only the term "or indecent" from section 223(a)(1)(B)(ii), leaving the remainder of that section (dealing with obscenity) in force.

Max Hailperin
Saint Peter, MN

Authors Respond:
We apologize for the inaccuracy. As we noted in detail in our Jan. 1999 "Legally Speaking" column ("The CDA Is Not as Dead as You Think," p. 15), the obscenity provisions were retained and the indecency provisions were dropped.

Persuasive Technologies

In "Toward an Ethics of Persuasive Technology" (May 1999, p. 51), D. Berdichevsky and E. Neuenschwander use the term "blame" when writing about who gets blamed when computers make serious mistakes, rather than where it is appropriate to assign responsibility.

When contemplating where responsibility should be assigned, one should not forget leverage. Programmers have relatively little leverage when compared to management. Their view of the situation in which their work will be used is also frequently restricted. Sometimes the restriction is by choice, other times by managers, who wish programmers would concentrate more effectively. When a programmer demurs to a manager’s favorite proposal, the manager is always implicitly saying: "If you won’t implement my idea, I’ll find someone who will." The only thing the programmer accomplishes by refusal is walking papers. Why, then, was the responsibility of management totally left out of this analysis?

Charles Hixson
Oakland, CA

Authors Respond:
We do not leave the responsibility of managers out of this analysis. Rather, we address the issue of shared responsibility among programmers, managers, and other parties involved in the design of persuasive technologies by referring to these groups collectively as creators. In one instance (p. 56), we argue against the commonly held position that programmers are not responsible for what they are hired by other parties to produce. This is not to insinuate that these other parties—such as managers, contractors, and chief executive officers—themselves bear no responsibility. They bear much responsibility. Ethics should not be seen as a hunt for blame—what has been referred to as "the calculus of fault"—but as a search for guiding principles, applicable to managers and managed alike.

Linux Goals

Seeing the article "The Linux Edge," written by Linus Torvalds himself (Apr. 1999, p. 38), naturally I read it. Strangely enough, there was not one word about Unix in the article. It confused me, most likely because of a lack of information on my part, or insufficient reasoning powers (stupidity). Perhaps somebody may be able to explain where my reasoning goes wrong.

As I read the article, I determined that Linux has three major goals:

  • To be on the cutting edge.
  • To let the most exciting developments for Linux happen in user space, not kernel space.
  • To put as little as possible in the kernel.

When I try to read about what the first goal means, I find nothing that seems cutting edge. The article mentions clustering, SMP, and embedded devices. These concepts have been available in other operating systems for a long time.

Moreover, it sounds like a microkernel architecture would be perfect for the other Linux goals, but the article explains why Linux does not use a microkernel architecture. There are three reasons:

  • Microkernels were experimental (in the early 1990s).
  • Microkernels are obviously more complex (than monolithic kernels).
  • Microkernels execute notably slower.

My objection to the first reason is that microkernel work in the universities started in the 1980s (Amoeba and Mach; see www.cs.vu.nl/pub/amoeba and www.cs.cmu.edu/afs/cs.cmu.edu/project/mach/public/www/mach.html). Commercial systems were available before 1990 (Chorus and QNX; the latter at www.qnx.com; Chorus, bought by Sun, at www.sun.com/chorus).

My objection to the third reason is that large benchmarks (like AIM and SPEC, as opposed to microbenchmarking a single system call) show that Linux on a microkernel is 7% slower than Linux as a monolith (see os.inf.tu-dresden.de/L4/LinuxOnL4). A Web server is four times faster on a microkernel system, as opposed to running on a monolithic kernel (see www.pdos.lcs.mit.edu/exo). The speed increase is due to replacing kernel low-level disk access and TCP/IP buffer management with user-level code that knows the problem domain (Web serving) and optimizes accordingly. The ultra-secure concept of capability-based systems is deemed too slow to be practical on monolithic kernels. But putting a capability system on a microkernel can yield speeds slightly faster than Linux (see www.eros-os.org).

So, we are left with the second reason. And, as was often explained by the late Dan Hildebrand of QNX, it is more difficult to build a fast microkernel, than a fast monolithic kernel. But nowhere in Torvalds’ article did I find as the most important goal of Linux: to be simple to implement.

I think Linux is a very good implementation of Unix. But Unix is not cutting edge (after all, it is a design from 1970), and it would be welcome if Linux could be a starting signal for catching up with more modern operating system designs. Developers and users would benefit if Linux incorporated results from the past 20 years.

Using a monolithic design was surely the quickest and easiest way to get a kernel up and running. Now this design will make the three goals mentioned earlier more difficult to obtain. Why does Torvalds not explain why this trade-off was made when introducing these goals? Or why it is not possible to switch to a microkernel now, when all the difficult work has already been done by others? Finally, it is nice to know that Linux is aiming toward being cutting edge. But why is there no (real) example of what that is?

I want to dispel the notion that I am a microkernel bigot. There are ways to make it easier for user-space programs to do things normally done in kernel space that do not involve microkernels. By giving users a true distributed file system with access to everything (files, networks, processes) through the file system, as in Inferno (see www.lucent.com/inferno) and Plan9 (plan9.bell-labs.com/plan9), monolithic kernels become extensible for the user. For instance, SPIN (www.cs.washington.edu:80/research/projects/spin/www) allows user code to be downloaded into the kernel, with security intact. Inferno and SPIN are cutting-edge systems (from after 1995), but Plan9 literature was available when Linux was designed.
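
To make the file-system approach concrete, here is a minimal sketch of how a program might dial a TCP connection on a Plan9-style system, where the network stack is reached through ordinary open, read, and write calls on a synthetic file tree rather than through special system calls. The paths and the control-message format follow Plan9 conventions, but the code is illustrative only and is not taken from any of the systems cited above.

    /* Sketch: dialing a TCP connection through a Plan9-style /net file tree.
       Every step is an ordinary file operation on the /net hierarchy. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* addr is a dial string such as "204.178.31.2!80" (address!port, illustrative). */
    int dial_tcp(const char *addr)
    {
        char num[32], path[64], msg[128];
        int cfd, dfd, n;

        /* Opening the clone file creates a new TCP conversation;
           reading it back gives the conversation number as text. */
        cfd = open("/net/tcp/clone", O_RDWR);
        if (cfd < 0)
            return -1;
        n = read(cfd, num, sizeof num - 1);
        if (n <= 0) { close(cfd); return -1; }
        num[n] = '\0';

        /* Writing a control message asks the protocol stack to connect. */
        snprintf(msg, sizeof msg, "connect %s", addr);
        if (write(cfd, msg, strlen(msg)) < 0) { close(cfd); return -1; }

        /* The conversation's data file is now an ordinary byte stream. */
        snprintf(path, sizeof path, "/net/tcp/%d/data", atoi(num));
        dfd = open(path, O_RDWR);
        close(cfd);
        return dfd;
    }

Because the /net tree can just as well be served by a user-space file server, new or modified network behavior can be added without touching the kernel, which is the extensibility point made above.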

Bengt Klenbergs
Stockholm, Sweden

Linus Torvalds Responds:
Let it stand that I dislike microkernels and don’t see the point of them. I could go into the whys, but that isn’t the point of this response.

Klenbergs believes that "being on the cutting edge" should somehow imply microkernels. I disagree. Microkernels are, in my admittedly not very humble opinion, an academic exercise in trying to make operating systems interesting research projects.

OSs should not be research projects. OSs should be so boring you take them for granted and don’t give them another thought. It is a sad fact that this is not the case now, but it’s silly to think you should try to make OSs exciting by coming up with new approaches to them.

OS technology is well known and stable, and the "cutting-edge" part is not in how the OS works, but in what it allows you to do. In the case of Linux, it is in the way it’s developed. Whether an OS is a microkernel or not is basically immaterial and an implementation question: the fact that it allows you to do interesting work on interesting and affordable machines is what really makes a difference in the end.

Anybody who wants to work in basic OS research is obviously encouraged to do so. I come from a research environment myself, and in that sense I can only applaud people who do research in any area. However, thinking that research automatically makes sense in a production environment is naive at best.

My purely personal opinion is there are a lot more interesting areas. Instead of worrying how the OS is structured, you should worry about what you can do with it and in what directions you can expand it. I have really never claimed Linux itself to be academic research, and in fact the only article I ever wrote on it was not about how Linux works but about its portability issues, which were (to me) interesting.

But my personal lack of interest in OS research should not be construed as a backlash against research itself. My opinions are just that: opinions. I’m aware of the research, and I choose to ignore it.

By the way, the performance numbers quoted have little to do with real systems in production use. I would encourage Klenbergs to show some scientific critique of numbers generated in laboratory conditions vs. real-life behavior (hint: the Web server number in particular is in my opinion basically dishonest. It cuts down the problem domain, then optimizes for that cut-down version, instead of correctly handling the generic case).

So, when you think of the goals of Linux, think of the things it has allowed people to do. My hope is that people will discuss Linux not just as an interesting research topic, but as a vehicle for doing the things that really matter—not only research.

Correction

In Richard Heeks’ Column, "Software Strategies in Developing Countries," (June 1999, p. 15), strategic positions C and D in the figure were mistakenly reversed. Position C should have been located in the domestic market/packages cell; position D should have been located in the domestic market/services cell.
