
When Harm to Conference Reputation Is Self-Inflicted

Letters to the Editor

Citing conferences sponsored by the World Scientific and Engineering Academy and Society, Moshe Y. Vardi’s Editor’s Letter "Predatory Scholarly Publishing" (July 2012) reminded me of my own participation in the WSEAS flagship summer conference (the International Conference on Circuits, Systems, Communications and Computers) several years ago, contributing papers, tutorials, and even a plenary lecture, as an ad hoc replacement for a missing speaker. I recall WSEAS adopting a new policy saying papers would not be accepted for publication unless they included at least two references pointing to previous WSEAS proceedings or transactions. At first, I thought it odd that a scientific association would mandate self-citation to deliberately and artificially increase its citation impact but imagined it was simply common practice among conference organizers.

Visiting the Scholarly Open Access Web site (http://scholarlyoa.com) Vardi recommended, I realized that such a policy should indeed be viewed as harmful to an academic publisher’s credibility and reputation. I would therefore like to thank Vardi for pointing out such publisher behavior contrary to the interests of all scholarly publishing. It is particularly important for those of us whose conference travel is not sponsored by governments and other institutions.

Miroslav Skoric, Novi Sad, Serbia


Don’t Blame Modular Programming

In "Large-Scale Complex IT Systems" (July 2012) Ian Sommerville et al. reached unwarranted conclusions, blaming project failures on modular programming: "Current software engineering is simply not good enough." Moreover, they did so largely because they missed something about large-scale systems. Their term "coalition" implies alliance and joint action that do not exist among real-world competitors. They said large-scale systems "coalitions" have different owners with possibly divergent interests (such as in the 2010 Flash Crash mentioned in the article), yet expected the software "coalition" used by the owners to work cooperatively and well, which makes no sense to me. Even if the owners, along with their best minds and sophisticated software, did cooperate to some extent, they would in fact be attempting to deal with some of the most difficult problems on earth (such as earning zillions of dollars in competitive global markets). Expecting software to solve these problems in economics makes no sense when even the most expert humans lack solutions.

Alex Simonelis, Montréal

Reading Ian Sommerville et al. (July 2012), I could not help but wonder whether new initiatives and institutions are really needed to study and create ultra-large-scale complex artificial systems. We should instead ponder how the behavior and consequences of such systems might be beyond our control, and so whether they should exist in the first place. I am not referring to grand-challenge projects in science and engineering like space exploration and genomics with clear goals and benefits but the ill-conceived, arbitrary, self-interest-driven monstrosities that risk unpredictable behavior and harmful consequences. Wishful thinking, hubris, irresponsible tinkering, greed, and the quest for power drive them, so they should be seen not as a grand challenge but as a grand warning.

Why invent new, ultimately wasteful/destructive "interesting" problems when we could instead focus on the chronic "boring" deadly ones: war, polluting transportation, and preventable disease, as well as lack of clean water and air? These are real, not contrived, with unglamorous solutions that are infinitely more beneficial for all.

Todd Plessel, Raleigh, NC


Konrad Zuse and Floating-Point Numbers

In his news story "Lost and Found" (July 2012), Paul Hyman characterized Konrad Zuse's Z9 as "the world's first program-controlled binary relay calculator using floating-point arithmetic." This description is incorrect; in fact, it is the other way round: the Z9/M9 was the only one of Zuse's computers to use binary-coded-decimal fixed-point arithmetic.

Zuse used binary floating point from the time of his earliest computer designs, because his own thorough analysis showed that binary representation reduced the complexity of the arithmetic unit and that floating point was adequate for engineering calculations, which, as a civil engineer, is what he primarily had in mind.

Among the pioneers of early computing, from Babbage to Aiken to Wilkes, Zuse alone used floating-point arithmetic; his general-purpose computers Z1 (1938), Z3 (1941), Z4 (1945), Z5 (1953), and Z11 (1955) all used binary floating-point arithmetic. Beginning with the Z22 (1958), the computers developed by the Zuse Company used binary fixed-point arithmetic, implementing floating-point arithmetic through microprograms.

Zuse invented a format for binary floating-point numbers similar to that of IEEE 754, using it in his very first machine, the Z1; Donald Knuth attributes the invention of normalized floating-point numbers to Zuse. The Z3 included representations for 0 (zero) and ∞ (infinity). Operations involving these "exceptional" numbers were performed as usual, as in 0 – 0 = 0 and ∞ + 5 = ∞. Operations with an undefined result (such as 0/0, ∞ – ∞, and ∞/∞) were detected automatically, signaled by a special light before the machine stopped.
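Zuse's treatment of 0 and ∞ anticipates the special values of modern IEEE 754 arithmetic, which behaves analogously in any current language. A minimal Python sketch of that analogous behavior (illustrative only, not a model of the Z3 hardware, which signaled undefined results with a lamp rather than a NaN value):

```python
import math

inf = math.inf

# Operations on "exceptional" values proceed as usual,
# much as on the Z3: 0 - 0 = 0 and inf + 5 = inf.
assert 0.0 - 0.0 == 0.0
assert inf + 5 == inf

# Operations with an undefined result (inf - inf, inf / inf)
# yield the special value NaN in IEEE 754 arithmetic.
assert math.isnan(inf - inf)
assert math.isnan(inf / inf)
```
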

Zuse discussed the question of binary vs. decimal arithmetic with Howard Aiken, writing, "We also had some differences of opinion on technical matters. Aiken advocated the decimal system and had developed very beautiful codes for decimal numbers using several bits. I was more a proponent of pure binary representation—in any case, at least where large scientific computers were concerned. However, I had also used encoded decimal numbers in the mechanical solution for the punch card machine."1 The "punch card machine" was the Z9/M9.

Jürgen F.H. Winkler, Feldkirchen-Westerham, Germany


Attack on Piracy vs. New Creation

Notwithstanding the anti-copyright logo illustrating Joel Waldfogel's Viewpoint "Digitization and Copyright: Some Recent Evidence from Music" (May 2012), legal attacks on unauthorized copying did not begin with Napster in 1999 but have been around since at least the first read/write media made copying practical, including audio and video cassettes in the 1980s and recordable CDs in the 1990s.

The earliest corporate legal resistance to illegal copying of music in the Napster era involved the MP3 format, not cassettes, reflecting the recording and distribution industry’s concern over the new technology-enabled ease of copying and sharing with no loss of quality. This was before any particular company, including Napster, had the ability to complicate the business situation by providing the peer-to-peer option to millions of everyday users. The target was individual sharers, not the media that made sharing possible. However, all such campaigns ultimately failed to prevent unauthorized copying of copyrighted work, even as some facilitating organizations were sued out of existence.

Napster also heralded more permissively licensed music. Creative Commons licensing followed, making it easier for artists, as well as rights holders, to adopt revenue models different from the traditional numbers-of-bound-copies as set by distributors. Though difficult to say which came first, there was a strong cross-influence (so at least some correlation) between zero-cost copying and permissively licensed creation of free-to-copy works. Waldfogel said that, given there is more illegal copying today than ever before, there should likewise be a decline in production, as a new musical work should earn less money when it can just be copied. However, the volume of new work has not declined.

So Waldfogel’s hypothesis (or my understanding of it) means creation is not inhibited, and some newer creation, distribution, and payment models are based on metrics other than traditional per-copy margin. It is not that Waldfogel’s metrics distort the legal foundation of copyright-protected music publishing and distribution but that the legal foundation has produced a market in which what the distributor is able to measure—reviews—is distorted by that foundation. Music critics are more likely to comment on a work produced and marketed by, say, Sony than on the equivalent work recorded and distributed independently by an individual musical artist directly.

Even though some well-known bands (notably Nine Inch Nails and Radiohead) have produced permissively licensed (and widely reviewed) albums, they follow Creative Commons licensing, not borrowing and redistributing permissively licensed music but creating new works covered by a permissive license to begin with. Most composers never attract much attention because they are not part of the established distribution and promotion ecosystem, even if their work reaches a potentially worldwide audience online, free of licensing barriers. Waldfogel’s metrics reflect the commercial and cultural effects of digital creation, rather than raw sales numbers.

Gunnar Wolf, Mexico City
