Letters to the Editor

To Change the World, Take a Chance

Some of what Constantine Dovrolis said in the Point/Counterpoint “Future Internet Architecture: Clean-Slate Versus Evolutionary Research” (Sept. 2010) concerning an evolutionary approach to developing Internet architecture made sense, and, like Jennifer Rexford on the other side, I applaud and encourage the related “evolutionary” research. But I found his “pragmatic vision” argument neither pragmatic nor visionary. Worse was the impudence of the claim of “practicality.”

Mid-20th century mathematician Morris Kline said it best when referring to the history of mathematics: “The lesson of history is that our firmest convictions are not to be asserted dogmatically; in fact they should be most suspect; they mark not our conquests but our limitations and our bounds.”

For example, it took 2,000 years for geometry to move beyond the “pragmatism” of the parallel postulate, some 200 years for Einstein to overtake Newton, 1,400 years for Copernicus to see beyond Ptolemy, and 10,000 years for industrialization to supplant agriculture as the dominant economic activity. The Internet’s paltry 40–50-year history is negligible compared to these other clean-slate revolutions.

Though such revolutions generally fail, failure is often the wellspring of innovation. Honor and embrace it. Don’t chide it as “impractical.” The only practical thing to do with this or any other research agenda is to open-mindedly test our convictions and assumptions over and over…including any clean-slate options.

I worry about the blind spot in our culture that frequently chooses “practical effort” over the bolder investments that significantly change things. Who takes the 10,000-, 1,000-, or even 100-year view when setting a research agenda? Far too few. Though “newformers” fail more often than the “practical” among us, they are indeed the ones who change the world.

CJ Fearnley, Upper Darby, PA

What Deeper Implications for Offshoring?

As someone who has worked with offshoring for years, I was drawn to the article “How Offshoring Affects IT Workers” by Prasanna B. Tambe and Lorin M. Hitt (Oct. 2010) but disappointed to find a survey-style analysis that essentially confirmed less than what most of us in the field already know. For example, at least one reason higher-salaried workers are less likely to be offshored is that they already appreciate the value of being able to bridge the skill and cultural gaps created by employing offshore workers.

I was also disappointed by the article’s U.S.-centric view (implied at the top in the word “offshoring”). What about how offshoring affects IT workers in countries other than the U.S.? In my experience, they are likewise affected; for example, in India IT workers are in the midst of a dramatic cultural upheaval involving a high rate of turnover.

While seeking deeper insight into offshoring, I would like someone to explain the implications of handing the keys to a mission-critical system to someone in another country who is not subject to U.S. law. What if relations between the two countries were to deteriorate and the other country were to seize critical information assets? We have pursued offshoring for years, but I have yet to hear substantive answers to these questions.

Mark Wiman, Atlanta, GA

Authors’ Response:

With so little hard data on outsourcing, it is important to first confirm some of the many anecdotes now circulating. The main point of the article was that the vulnerability of occupations to offshoring can be captured by their skill sets and that the skills story is not the only narrative in the outsourcing debate.

The study was U.S.-centric by design. How offshoring affects IT workers in other countries is important, but the effects of offshoring on the U.S. IT labor market merit their own discussion.

Misappropriation of information has been studied in the broader outsourcing context; see, for example, Eric K. Clemons and Lorin M. Hitt’s “Poaching and the Misappropriation of Information” in the Journal of Management Information Systems 21, 2 (2004), 87–107.

Prasanna B. Tambe, New York, NY
Lorin M. Hitt, Philadelphia, PA

Interpreting Data 100 Years On

Looking to preserve data for a century or more involves two challenging, orthogonal problems. One—how to preserve the bits—was addressed by David S.H. Rosenthal in his article “Keeping Bits Safe: How Hard Can It Be?” (Nov. 2010). The other is how to read and interpret those bits 100 years on, when everything might have changed—formats, protocols, architecture, storage system, operating system, and more; consider the dramatic changes over just the past 20 years. There is also the challenge of how to design, build, and test complete systems while trying to anticipate how they will be used in 100 years. The common, expensive solution is to migrate all the data every time something changes, while controlling costs by limiting the amount of data that must be preserved, whether through deduplication, legal obsolescence, or the judgment of librarians, archivists, and others.

For more on data interpretation see:

  1. Lorie, R.A. A methodology and system for preserving digital data. In Proceedings of the Joint Conference on Digital Libraries (Portland, OR, July 2002), 312–319.
  2. Lorie, R.A. Long-term preservation of digital information. In Proceedings of the First ACM/IEEE-CS Joint Conference on Digital Libraries (Roanoke, VA, Jan. 2001), 346–352.
  3. Rothenberg, J. Avoiding Technological Quicksand: Finding a Viable Technical Foundation for Digital Preservation. Council on Library & Information Resources, 1999.

Robin Williams, San Jose, CA

Author’s Response:

As Williams says, the topic of my article was not interpreting preserved bits. Jeff Rothenberg drew attention to the threat of format obsolescence in Scientific American (Jan. 1995), a focus that has dominated digital preservation ever since. But Rothenberg was writing before the Gutenberg-like impact of the Web transformed digital formats from being private to an application into a publishing medium, leading him and others to greatly overestimate the obsolescence risk of current formats.

I am unable to identify a single format widely used in 1995 that has since become obsolete but would welcome an example. At the 2010 iPres conference (http://www.ifs.tuwien.ac.at/dp/ipres2010/) I asked the audience whether any of them had ever been forced to migrate the format of preserved content to maintain renderability, and no one had.

Format obsolescence is clearly a threat that must be considered, but compared to the technical and economic threats facing the bits we generate, it is insignificant and the resources devoted to it are disproportionate.

David S.H. Rosenthal, Palo Alto, CA

Objects Always! Well, Almost Always

Unlike Mordechai Ben-Ari in his Viewpoint “Objects Never? Well, Hardly Ever!” (Sept. 2010), I found learning OOP exciting when I was an undergraduate almost 30 years ago. I realized that programming is really a modeling exercise and that the best models reduce the communication gap between computer and customer. OOP provides more tools and techniques for building good models than any other programming paradigm.

Viewing OOP from a modeling perspective makes me question Ben-Ari’s choice of examples. Why would anyone expect the example of a car to be applicable to a real-time control system in a car? The same applies to the “interface” problem of supplying brake systems to two different customers: a well-chosen model would give each customer its own external interface, so there would be no need to change the “interface” to the internal control systems, contrary to Ben-Ari’s position.
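To make that point concrete, here is a purely hypothetical Ruby sketch (the class and method names are mine, not Ben-Ari’s or the letter’s): the interface to the internal control system stays fixed, while customer-specific adapters absorb each carmaker’s conventions.

    # Hypothetical sketch only: a fixed internal interface plus per-customer adapters.
    class BrakeSystem
      def apply(force_newtons)                  # stable interface used by internal control code
        puts "applying #{force_newtons} N of braking force"
      end
    end

    class CustomerAAdapter                      # one adapter per customer; add or change freely
      def initialize(brakes)
        @brakes = brakes
      end

      def brake(pedal_fraction)
        @brakes.apply(pedal_fraction * 12_000)  # illustrative scaling only
      end
    end

    CustomerAAdapter.new(BrakeSystem.new).brake(0.5)   # prints: applying 6000.0 N of braking force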

Consider, too, quicksort as implemented in Ruby:

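The original listing is not reproduced here; a minimal sketch of the kind of concise, block-based Ruby quicksort the letter describes (my reconstruction, not the author’s code) might look like this:

    # Sketch of a concise recursive quicksort over a Ruby array.
    def quicksort(items)
      return items if items.size <= 1                     # empty or single-element arrays are sorted
      pivot, *rest = items                                 # take the first element as the pivot
      smaller, larger = rest.partition { |x| x < pivot }   # split the remainder around the pivot
      quicksort(smaller) + [pivot] + quicksort(larger)
    end

    quicksort([3, 1, 4, 1, 5, 9, 2, 6])   # => [1, 1, 2, 3, 4, 5, 6, 9]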

This concise implementation shows quicksort’s intent beautifully. Can a nicer solution be developed in a non-OOP language? Perhaps, but only in a functional one. It is also interesting to compare this solution with those in 30+ other languages at http://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Quicksort, especially the Java versions. OO languages are not all created equal.

But is OOP dominant? I disagree with Ben-Ari’s assertion that “…the extensive use of languages that support OOP proves nothing.” Without OOP in our toolbox, our models would not be as beautiful as they could be. Consider again Ruby quicksort, with no obvious classes or inheritance, yet the objects themselves—arrays, iterators, and integers—are all class-based and have inheritance. Even if OOP is needed only occasionally, the fact that it is needed at all and subsumes other popular paradigms (such as structured programming) supports the idea that OOP is dominant.
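That claim is easy to check in an irb session; the following quick illustration is mine, not part of the original letter:

    [1, 2, 3].class                        # => Array
    Array.ancestors.include?(Enumerable)   # => true; iteration is inherited behavior
    [1, 2, 3].each                         # => #<Enumerator: [1, 2, 3]:each>; even the iterator is an object
    42.class.ancestors.include?(Numeric)   # => true; integers sit in a class hierarchy, too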

I recognize how students taught flowcharts first (as I was) would have difficulty switching to an OO paradigm. But what if they were taught modeling first? Would OOP come more naturally, as it did for me? Moreover, do students encounter difficulties due to the choice of language in their first-year CS courses? I’m much more comfortable with Ruby than with Java and suspect it would be a better introductory CS language. As the quicksort example suggests, Ruby provides better support for the modeling process.

Henry Baragar, Toronto

I respect Mordechai Ben-Ari’s Viewpoint (Sept. 2010), agreeing there is neither a “most successful” way of structuring software nor even a “dominant” way. I also agree that research into success and failure would inform the argument. However, he seemed to have fallen into the same all-or-nothing trap that often permeates this debate. OO offers a higher level of encapsulation than non-OO languages and allows programmers to view software realistically from a domain-oriented perspective, as opposed to a solution/machine-oriented perspective.
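As a rough illustration of that domain-oriented framing (my sketch, with invented names, not Fenwick’s), a domain object can hide its representation entirely behind behavior:

    # Sketch: callers work with domain behavior, never with the raw representation.
    class Account
      def initialize(balance_cents = 0)
        @balance_cents = balance_cents     # representation stays private to the object
      end

      def deposit(amount_cents)
        raise ArgumentError, "negative deposit" if amount_cents < 0
        @balance_cents += amount_cents
      end

      def balance
        @balance_cents / 100.0             # expose a domain-level view, in dollars
      end
    end

    account = Account.new
    account.deposit(2_500)
    puts account.balance                   # prints 25.0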

The notion of higher levels of encapsulation has indeed permeated many aspects of programmer thinking; for example, mobile-device and Web-application-development frameworks leverage these ideas. The core tenets of OO were, after all, conceived to solve the software-development problems prevalent at the time.

Helping my students become competent, proficient software developers, I find the ones in my introductory class move more easily from an OOP-centric view to a procedural view than in the opposite direction, but both types of experience are necessary, along with others (such as scripting). So, for me, how to start them off and what to emphasize are important questions. I like objects-first, domain-realistic software models, moving as needed into the nitty-gritty (such as embedded network protocols and bus signals). Today’s OO languages may indeed have deficiencies, but returning to an environment with less encapsulation would mean throwing out the baby with the bathwater.

James B. Fenwick Jr., Boone, NC

The bells rang out as I read Mordechai Ben-Ari’s Viewpoint (Sept. 2010)—the rare, good kind, signaling I might be reading something of lasting importance. I particularly appreciated his example of an interpreter being “nicer” as a case/switch statement; some software is simply action-oriented and does not fit the object paradigm.

His secondary conclusion—that Eastern societies place greater emphasis on “balance” than their Western counterparts, to the detriment of the West—is equally important in software. Objects certainly have their place but should not be advocated to excess.

Alex Simonelis, Montreal
