I was quite surprised and disappointed by Michael Noll’s "Technical Opinion" ("Does Data Traffic Exceed Voice Traffic?," June 1999, p. 121). The very premise that "today’s megamergers may be tomorrow’s megalosses" because voice traffic will continue to exceed data traffic is severely flawed, and the evidence is undermined by Noll’s many omissions.
Although Noll references digitized voice traffic in a number of places, he misses the fact that on today’s networks—public switched telephone networks (PSTNs) included—voice traffic is data traffic. I suspect the argument Noll was really trying to make is one that asynchronous transfer mode (ATM) proponents have been making for years: Virtual circuits are necessary for carrying multimedia traffic, and packet-based networks are insufficient.
The ever-increasing availability of bandwidth (via cable modem, satellite, and DSL) goes a long way toward improving multimedia support. The deployment of quality of service (QoS) support for IP (such as RSVP and DiffServ) will make the biggest difference in the long run. Although these are new technologies, currently used only on private networks, their spread will increase as more QoS-enabled IP products are released, including Microsoft’s Windows 2000. Once they are widely deployed, the differences between virtual circuits and IP-based networks will all but disappear, further reducing the difference between voice and data.
By premising his argument on voice traffic continuing to exceed data traffic (assuming there is a difference), Noll implies that increased traffic volume is an indication of success. I would submit that a reduction in traffic with an increase in users would be a better metric. IP multicast, for example, provides the most resource-efficient way to deliver data of any kind—including voice and video streams—simultaneously to a large and possibly widely dispersed (potentially global) audience.
Although we don’t yet have a ubiquitous multicast-enabled Internet, the number of multicast-enabled ISPs and the amount of compelling multicast content are growing at an ever-increasing rate. Multicast is a natural complement to radio, coaxial cable, and satellite systems, as well as to all network media, including ATM. You can bet that national broadcasters are looking to it as part of their next step into the digital age.
Noll focuses on 64Kbps voice traffic (128Kbps total for two-way), mentioning 4Mbps video traffic (one-way) only in passing. He briefly mentions compression but neglects the wide variety of encoding techniques available that can widely vary the bandwidth requirements and characteristics of audio and video. I suggest Noll take a look at the thousands of radio stations currently providing stereo audio over the Internet via 28.8Kbps modems, as just one of many examples unavailable from today’s PSTNs.
Noll also neglects to consider the rich diversity of applications possible when various multimedia and data types are mixed, possibly in two-way conversations, including audio, video, text chat, and whiteboard for multimedia conferences. He misses the point of convergence, which is to enable much more than email, file transfer, Web browsing, and even Internet telephony. The point is to remove restrictions so that new and different applications become possible, or to enable old applications via a new transport. Although the signals from Internet radio stations may not have the high fidelity of a quality FM signal, they can now reach a global niche audience and be heard within buildings that radio waves cannot otherwise penetrate, not to mention that they can accompany other information, such as song titles, artist names, lyrics, graphics, and links to Web pages.
The battle between those touting circuits and those preferring packets is over; it’s pointless. There’s no reason one must be exclusive of the other. They are complements, and the focus today is on using them together to promote a network that will support new, rich, and diverse applications. That is why the megamergers have occurred, not because analysts predict the PSTN will go away.
It doesn’t make sense to try to distinguish voice traffic any longer; it’s just another data type. Why have a network dedicated to it exclusively? Once in digital form, voice traffic can accompany any other type of data the available bandwidth allows. And when used with IP, it can traverse virtually any network media worldwide. ATM has not been a success for the desktop because it is complicated and expensive compared to Ethernet, but ATM has proved its value in Internet backbones and as the foundation of DSL technologies that provide fast, "always-on, always-connected" Internet access for homes and small offices.
Trying to foment an us-and-them attitude does a disservice. We should be working on the challenges of creating rich, ubiquitous, secure, and easy-to-use applications, rather than arguing about whose infrastructure is better.
The ideal Net is transparent. Anything else is broken.
Bob Quinn
Weston, MA
Noll treats data compression as an afterthought; he mentions it only in the last two paragraphs of his sidebar (p. 123). The omission is fatal to his analysis.
Noll’s count of the bit rate of telephone speech (128Kbps) is exaggerated by more than a factor of 15. During a voice call, it is rare for both parties to talk simultaneously, and periods of complete silence on the order of hundreds of milliseconds are common. Today, toll-quality voice compression can provide the perception of a full-duplex call with a bit rate of about 8Kbps.
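For readers who want to check the arithmetic, here is a minimal sketch using only the figures quoted above:

```python
# Checking the factor-of-15 claim against the figures quoted above.
noll_rate_kbps = 128     # Noll's two-way accounting: two 64Kbps streams
compressed_kbps = 8      # toll-quality compressed voice today
print(noll_rate_kbps / compressed_kbps)   # 16.0, i.e., more than a factor of 15
```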
Noll then compares this exaggerated (and nonempirical) speech bit rate with a Web-surfing and file-downloading bit rate that is already highly compressed. JPEG, GIF, MPEG, MP3, PDF, and ZIP formats all provide compression, and all are ubiquitous on the Web. About the only Web traffic not compressed is the actual text displayed on a Web page, and even that is a tiny fraction of total bandwidth. Noll’s claim that "the compression ratios for speech and for data are about the same, and my conclusions are unaffected" is false, since the information density of uncompressed voice data is far lower than that of compressed Web data.
The biggest flaw in Noll’s analysis is that he looks backward instead of anticipating improved technology. The bit rate for voice telephony can only go down as voice compression algorithms improve. Meanwhile, the number of Web users and Web pages continues to skyrocket even as Web content (including advertising) becomes more realistic and exponentially more demanding of bandwidth. If the per-bit cost of the backbone network improves during the next 10 years as much as it has during the past 10, Internet-delivered full-motion video in 2009 will be not only cost-effective but essentially free.
Daniel Dulitz
Port Matilda, PA
In a study of the question Noll asks ("Does data traffic exceed voice traffic?"), Kerry Coffman and I concluded in an article, "The Size and Growth Rate of the Internet" (see www.firstmonday.dk/), that the bandwidth of data networks was about as large as that of the voice network at the end of 1997 (and is somewhat larger now). We also concluded that the amount of data traffic, as measured in bytes, will exceed that of voice traffic around the year 2002 (assuming, as Noll did, that a voice conversation involves two 64Kbps streams of bits, one in each direction).
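As a rough illustration of how a crossover year falls out of such growth rates, consider the following sketch; the starting ratio and the growth rates are illustrative round numbers, not the exact figures from our article:

```python
# Illustrative crossover calculation: data traffic (in bytes) starts at
# roughly 1/15 of voice at the end of 1997, doubles yearly, while voice
# grows about 8% a year. All three numbers are assumptions for illustration.
voice, data, year = 15.0, 1.0, 1997
while data < voice:
    data *= 2.0      # data traffic doubling annually
    voice *= 1.08    # voice traffic growing ~8% annually
    year += 1
print(year)   # 2002 under these assumptions
```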
There are several sources of underestimates in Noll’s "Technical Opinion." For example, he assumes that the average email message is about 1.2KB. However, the spam messages I receive (I’ve collected almost 2,000 spam messages for the purposes of doing various statistical analyses) average 4KB. The approximately 300 non-spam messages waiting in my mailbox (and from which both spam and mailing list messages have been filtered out) average about 10KB. This excludes all messages with Word or PowerPoint attachments, which are often as large as a megabyte or more.
Online users with modem service to ISPs appear to average about 2MB of downloaded material per day. However, their traffic is dwarfed by people in corporations and other institutions with broadband access. For example, Princeton University has about 4,500 undergraduates, so, say, perhaps 8,000 people have connections to the Internet in one capacity or another. The average amount of material they download from the Internet is about 7.5MB per day per person.
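For perspective, here is what the Princeton figure implies as a sustained link rate, assuming (unrealistically, but for illustration) that traffic is spread evenly over 24 hours:

```python
# Converting 7.5MB per person per day into an average link rate,
# assuming traffic is spread evenly over the 86,400 seconds in a day.
people = 8_000
mb_per_person_per_day = 7.5                      # megabytes
bits_per_day = people * mb_per_person_per_day * 8e6
print(round(bits_per_day / 86_400 / 1e6, 1))     # ~5.6Mbps average
```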
Noll also appears to overestimate the amount of voice traffic. FCC figures say that total usage (local and long distance) of phone service per person (including both modem and fax calls) amounts to about 30 minutes per day.
Andrew Odlyzko
Florham Park, NJ
I agree that, contrary to many claims, voice traffic still exceeds data traffic, and that many reports of astronomical Internet growth rates are exaggerated.
The main problem with Noll’s estimates is that he didn’t collect any measurements. Rather, he used "back-of-the-envelope" calculations, based on asking his University of Southern California students about their activities, to arrive at an estimate of 16.2Mb of data traffic received per person per day.
My correspondence with colleagues at USC revealed that statistics for the university’s Internet traffic are available online (foo.usc.edu/netstats). These statistics, confirmed by USC’s network administrators, show that during the school year, average Internet traffic to USC is about 20Mbps. USC has about 28,000 students, so let’s say perhaps 35,000 people all told use data communications. Average traffic of 20Mbps, sustained over a full day and divided among 35,000 users, yields about 49Mb of data traffic received per person per day, or three times as much as the average Noll computes for his USC group.
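The same back-of-the-envelope calculation in code:

```python
# Deriving the per-person daily figure from the aggregate link rate.
avg_link_mbps = 20                 # average USC Internet traffic, school year
users = 35_000
megabits_per_day = avg_link_mbps * 86_400      # seconds in a day
print(round(megabits_per_day / users))         # ~49Mb per person per day
```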
While Noll’s estimates for data traffic appear low, his estimates of voice traffic appear high. He reports that on an average day, he "participate[s] in well over an hour of long-distance calls." However, FCC statistics (www.fcc.gov) show that, on average, people in the U.S. (adults and children) participate in about 10 minutes of long-distance calls per day. Although there are no solid numbers for long-distance voice calls at USC, it seems safe to assume, based on the FCC statistics, that they do not carry much more traffic than USC’s Internet connection.
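Converting the FCC figure into the same per-person daily units, using Noll’s own two-way 128Kbps accounting for a call, supports this assumption:

```python
# 10 minutes of long-distance calling per day, at 128Kbps two-way.
minutes_per_day = 10
voice_kbps = 128
print(minutes_per_day * 60 * voice_kbps / 1_000)   # 76.8Mb per person per day,
                                                   # the same order as 49Mb of data
```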
Probably the main conclusion to be drawn from Noll’s work is that he and his students are atypical, making greater use of voice telephony and less use of data communications than their colleagues. USC is not typical of the entire U.S. population, which is why the conclusion that data traffic for the nation as a whole still lags behind voice traffic is valid. However, data traffic is growing much more rapidly and, even in the absence of real-time video on the Internet, is likely to overtake voice traffic in a couple of years.
Kerry Coffman
Florham Park, NJ
Michael Noll Responds:
Quinn is clearly disturbed by my column. But he presents no data to refute my claim that voice exceeds data. All he does is state that voice is data. Yes, in the digital world, everything becomes bits—much as, in the analog world, everything becomes hertz.
Quinn goes on to make statements I have come to expect from the advocates of multimedia convergence. This is the kind of exaggeration I have sought to refute in my "Technical Opinion" and in other articles (see www.citi.columbia.edu/amnoll/ for a complete list).
Dulitz makes the point that telephone speech can be compressed. I indeed treated this possibility in my column, stating that ASCII text could also be compressed by about the same factor, and hence my overall conclusion would not be changed. This seems unacceptable to him.
The very best analog compression that we had 20 years ago was by a factor of about 20 to 1—and we are not yet even close to that with today’s sophisticated digital technology. But even if such a high compression ratio were achieved, voice would still dominate over data.
Dulitz then falls into the hype that so characterizes much of what is said about the Internet. I really doubt that full-motion video will be "essentially free," as Dulitz claims, in 2009. These are the kinds of statements that create confusion and lead businesses into megaflops. If video at a bit rate on the order of 1Mbps were free, as Dulitz claims, then so too would be all transmission over distance. I somehow doubt that the long-distance industry—currently at about $100 billion a year—will disappear by 2009.
Odlyzko and Coffman, to their credit, avoid hype and present data to refute my claims. They say I underestimated email traffic. But email is such an efficient means of telecommunication that even if I were wrong by a factor of 100, the amount of email traffic would be considerably less than voice traffic.
Odlyzko claims that 2MB per day is a good figure for downloading from ISPs. But this is precisely the figure I used in my estimates, so here we agree. But he then suggests a figure of 7.5MB per day per person based on usage data from Princeton University. This figure seems extremely high to me, and I wonder how it was obtained and what people at Princeton are downloading to create such traffic. Could it be that voice traffic is also included?
Odlyzko cites FCC data stating an average voice usage of 30 minutes per day. One problem is that this FCC data is outdated. Another is that the figure is a national average and thus, to be fair, would need to be compared with a similarly national figure for average data usage.
Coffman claims the average Internet traffic at USC is 20Mbps, from which he then concludes that the average student at USC is responsible for 49Mb of data traffic per day. This amount of data is astronomical. One problem with his methodology is that USC is a major node on the Internet and thus carries data traffic from other sources. This means the 20Mbps figure is too high. Moreover, the traffic graph for the USC connection to the Internet shows a large amount of steady traffic at all hours, making me suspicious that all sorts of Internet tests are being conducted continuously. Accordingly, this figure of 20Mbps needs to be reduced to represent USC data traffic generated by real users. If the figure is reduced, perhaps by a factor of 4 or so, it is then comparable to what I had already concluded in my informal survey.
Coffman compares his USC data traffic with a national average for voice traffic issued by the FCC. It is a fuzzy methodology to compare USC data traffic with an obviously too low figure for voice. USC data traffic must be compared to USC voice traffic.
To determine a figure for USC voice traffic, I spoke to USC’s telecommunications manager. The combined number of outside lines connecting USC to telephone suppliers is about 1,500. Since each line carries 128Kbps (64Kbps in each direction), these lines can carry a maximum of about 200Mbps, with an average of perhaps one quarter of the peak, or roughly 50Mbps. Compared with my reduced estimate of about 5Mbps for USC data traffic, this would mean USC voice traffic is 10 times USC data traffic, not including voice traffic within the USC campus.
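The arithmetic behind that conclusion, as a sketch; the quarter-of-peak average is my assumption, and the 5Mbps data figure is the measured 20Mbps reduced by my factor of 4:

```python
# USC voice versus data traffic, using the figures stated above.
lines = 1_500
peak_voice_mbps = lines * 128 / 1_000     # ~192Mbps, call it 200Mbps
avg_voice_mbps = peak_voice_mbps / 4      # assume average is a quarter of peak
data_mbps = 20 / 4                        # 20Mbps measurement reduced by 4
print(round(avg_voice_mbps / data_mbps))  # ~10 times as much voice as data
```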
It is important to remind the reader that my data was based on an informal survey of students in two different classes. My objective was to challenge those who make exaggerated claims about data traffic to present some real data.
Considerable investments in the future are being made by corporations. For example, AT&T is paying handsomely to purchase two cable television companies—TCI and MediaOne—partially in pursuit of the belief that the Internet is the way of the future. It is this kind of hazy thinking and the lack of real data that justify my belief that future megaflops will come from today’s megamergers.
Buying New PCs
I found Gerald Post’s column "How Often Should a Firm Buy New PCs?" ("Personal Computing," May 1999, p. 17) thorough, informative, and aggravating. I’ve been involved in my share of volume PC purchases, and I’ve learned that detailed cost justification is the road to ruin. Never have I seen the process avoid pitfalls that typically look downright stupid—once you’re armed with the clarity of hindsight.
I am not knocking Post’s conclusion; it is good advice. My point is that too often, the accounting types arbitrarily dismiss features or try to squeeze another $25 or $50 per PC out of a purchase, and we’ve all seen the results. First comes the a priori advice: "You’ll probably never need more than 16MB of RAM on your PC"; "Why do we need sound cards? We don’t use sound cards now"; "If the difference in modems is $300, how can you justify that difference?"; "We don’t do enough with graphics to justify graphics cards"; "You’ll be retired before you use up a ‘meg.’" Real-world examples; real scary.
Then come the complaints: "How could we possibly want upgrades already?"; "How are we going to get new computers when we haven’t depreciated the ones we’ve got?"; and my favorite, "Why didn’t you think of that when we bought the computers?"
Post provides some justification for what we generally know to be true. Let me propose a rule of thumb I discovered a couple of years ago. It has never failed me, and I think you’ll find it explains Post’s data quite nicely. In fact, most people who’ve pondered this rule find it maddeningly simple yet effective.
Months of satisfaction = (PC system sale price, rounded by the 4/5 rule to the nearest $100) × 0.01.
This gives you the number of months a computer will be owned before its user has an overwhelming urge to replace it. Sure, you can get more utility out of a PC, or you may crave a new board or chip a month after you buy a new machine. However, if you think about how long you were happy with the computers you owned, you’d agree this rule is eerily on the mark. It’s even useful for managers as a planning and budget tool.
For instance, if a new PC system costs $1,200, then in 12 months you’ll want to replace it. If a computer system costs $3,495, in 35 months you’ll want to replace it.
This does not include printers, scanners, and other peripherals that cannot be practically integrated into the computer, but does include Zip drives, DVD drives, and so forth. If you’re buying laptops, use .0067, rather than .01. The rule applies only to single-user, general-purpose equipment.
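For managers who prefer code to prose, here is a minimal sketch of the rule; I read the 4/5 rule as ordinary round-half-up to the nearest $100:

```python
# Harvey's rule of thumb: months of satisfaction from sale price.
def months_of_satisfaction(price: float, laptop: bool = False) -> float:
    rounded = ((price + 50) // 100) * 100     # round half up to nearest $100
    return rounded * (0.0067 if laptop else 0.01)

print(months_of_satisfaction(1_200))   # 12.0 months
print(months_of_satisfaction(3_495))   # 35.0 months ($3,495 rounds to $3,500)
```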
The rule is like a bad show-tune. It ain’t pretty, but it’s catchy, and you can’t get your mind off it.
James E. Harvey
Alexandria, VA
ACM’s Copyright Policy
I just finished reading about the changes in ACM’s copyright policy (www.acm.org/pubs). What a refreshing change to see a publisher pushing to incorporate the possibilities and challenges of Internet access into its copyright policy.
As a frequent user of ACM material, I have found access to reports and articles via the digital library invaluable. I’m glad to see the copyright policy appears to encourage and protect the interests of authors in both print and online. I depend on them to keep me up to date and inform me of new concepts in my field.
Misha Vaughan
Redwood Shores, CA
Addendum
In our article "Testing and Evaluating Computer Intrusion Detection Systems" (July 1999, p. 53), we neglected to mention by name two intrusion-detection systems that performed very well in our tests. The University of California, Santa Barbara, provided two systems—NetSTAT and USTAT—to detect network-based and host-based intrusions, respectively. Information on these systems can be found at www.cs.ucsb.edu/~kemm/NetSTAT.
The other systems tested were Stanford Research Institute’s Emerald Project, and the U.S. Air Force’s Automated Security Incident Monitor.
Robert Durst
Bedford, MA
Terrence Champion
Eric Miller
Luigi Spagnuolo
Brian Witten
Hanscom AFB, MA