Forum

Noncompetitive Net Screams for Controls

Two points seemed to be missing from Abbe Mowshowitz and Nanda Kumar’s "Viewpoint" "Public vs. Private Interest on the Internet" (July 2007). For one, while they quoted Edward Whitacre’s (in)famous comment about Google, Vonage, and other content providers not paying AT&T for access, they failed to point out the fallacy in Whitacre’s argument, one that, sadly, seems to have escaped nearly everyone else as well: we all already pay for our use of the Internet by paying for our own connections. I pay my ISP for access to the Internet, and content providers pay for theirs. Accordingly, my ISP should not be allowed to demand fees from anyone else in the course of providing my access, and neither should AT&T be allowed to charge the content providers its subscribers access.

The other oversight, though less glaring, concerns the state of competition. Net neutrality would be far less contentious if there were a healthy, competitive market for Internet access, with a large base of established ISPs to choose from and entry costs low enough for new ISPs to compete against their more established counterparts. In such an environment, companies that are too restrictive would lose business. However, the necessity of running lines to every home (or putting antennas in every neighborhood) means the cost of staying competitive is extremely high.

Most people have only two options when it comes to high-speed Internet access: the phone company and the cable company. In rural areas, they may have only one (the phone company) or none at all. Latency and other issues keep satellite providers from offering real competition. While I’m generally in favor of minimal governmental interference, this noncompetitive environment screams for controls to ensure fair access and prevent the kind of abuse endemic to monopolies and duopolies.

Having lived in an SBC Communications (now AT&T Inc.) service area, I’ve been exposed to invalid arguments (in ads) and anti-consumer action, and I am frustrated that nobody has pointed out the fallacy in those arguments: much of the cost of providing local service (such as running lines) is subsidized by long-distance revenue, by the Universal Service Fund (a tax), and by rates established by government regulators to ensure AT&T a profit. SBC/AT&T, which did not have to "pay its way" when building its network (subsidized instead by the public dole), balked when asked to provide fair access to the fruits of the public’s generosity by allowing competitive local exchange carriers (CLECs) access to its wires. Now that AT&T has run those wires and has adequate revenue to maintain and upgrade them, it suddenly seems to have decided the CLECs should run their own lines and that private investors should fund the costs in full.

The AT&T experience seems to reflect the company’s success in promoting legislative changes designed to destroy competition and lock in profit, often by arguing against the very things that were critical to its own success. Its efforts regarding Net neutrality represent more of the same.

PS: I work for a manufacturing company in the IT industry, not in telecommunications, though I own stock in several telecom companies. AT&T happens to be the largest of my telecom holdings, but that doesn’t stop me from being critical of its actions (even though these actions probably benefit me financially).

Rob Stitt
Summit, MO

Authors’ Response:

Stitt is correct in observing that everyone pays for access to the Internet. While content providers pay substantial fees to telecoms for network access, the critical issue is whether the telecoms have any business discriminating among downstream traffic, which would enable them to charge additional fees for various levels of service. The main argument for Net neutrality is based on the principle that ownership of a resource important to the public does not confer the right to dictate how that resource is to be used.

A healthier competitive environment for Internet access might indeed make it more difficult for any one actor to dictate terms in the marketplace. If history is any guide, however, competition is not likely to increase sufficiently to protect the public interest in the absence of government regulation.

Abbe Mowshowitz
Nanda Kumar
New York

Even a Good Abstraction Needs Experience and Testing, Too

I must take issue with David Lorge Parnas’s "Forum" comment "Use the Simplest Model, But Not Too Simple" (June 2007). By his criteria, any model of computation that allows unlimited storage is a lie. Parnas thus swept away such fundamental models as Turing machines and such concepts as undecidability and NP-completeness. Even the statement "quicksort takes O(n log n) time" is meaningful only in a model that allows unlimited storage.

A good abstraction captures the details that matter for a specific purpose. Recall that Alan Kay carried around a cardboard model of his proposed Dynabook—a precursor of the modern laptop—in order to verify that its size and weight were tolerable. Turing machines and cardboard models have little in common—one has unlimited storage, the other has none at all—though both are useful abstractions.

No abstraction tells the whole story. O-notation does not predict actual runtime, and cardboard models don’t overheat. A professional needs to use abstractions in conjunction with other knowledge sources (such as experience and testing) to produce a sound design.
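As a concrete illustration of this point (not part of Paulson’s letter), the following minimal Python sketch times a naive quicksort on random inputs of increasing size. The ratio of measured time to n log n stays roughly constant, while the absolute seconds depend entirely on the machine, the language, and the constants the O-notation abstracts away.

    import math
    import random
    import time

    def quicksort(xs):
        # Naive out-of-place quicksort, purely for illustration.
        if len(xs) <= 1:
            return xs
        pivot = xs[len(xs) // 2]
        return (quicksort([x for x in xs if x < pivot])
                + [x for x in xs if x == pivot]
                + quicksort([x for x in xs if x > pivot]))

    for n in (10_000, 100_000, 1_000_000):
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        quicksort(data)
        elapsed = time.perf_counter() - start
        # elapsed / (n log n) is roughly constant across n; the absolute
        # seconds are machine- and language-dependent, which is exactly
        # what the O-notation does not predict.
        print(f"n={n:>9}  time={elapsed:6.3f}s  time/(n log n)={elapsed / (n * math.log(n)):.2e}")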

Lawrence C. Paulson
Cambridge, England

Include Citations When Ranking Institutions and Scholars

In their article "Automatic and Versatile Publications Ranking for Research Institutions and Scholars" (June 2007), Jie Ren and Richard N. Taylor showed that automatic publication ranking can yield results similar to those from manual ranking processes. Although they warned that the measurements are sensitive to the choice of parameters, they also suggested that reproducing the rankings validates the use of the instrument for quality assessment.

In addition to numbers of publications, citations are also useful for ranking. Of the 17 journals mentioned in the article, 10 were also covered by the Science Citation Index Expanded. A publication and citation count for these 10 journals over the same period (1995–2003) yields completely different rankings.

I’ve now extended the table in the article (see users.fmg.uva.nl/leydesdorff/Table1_CACM/) to include publication and citation rates for the top 50 computing graduate programs; the (Spearman) correlation coefficients between the original rankings and the new ones are on the order of 0.5. My rankings are based on attributing one full point to an institution for each (co-authored) publication and its corresponding citations; however, proportional attribution does not significantly affect the results. Not only does the order change, but five of the 50 institutions did not have a single publication attributed to them in the Thomson ISI selection during the same period.
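To make the two counting schemes concrete, here is a small Python sketch using invented example data (not Leydesdorff’s actual ISI extract): under full counting each co-authoring institution receives one whole point per publication and all of its citations, while under proportional (fractional) counting the point and the citations are divided among the co-authoring institutions; a Spearman coefficient then compares the two resulting citation rankings.

    from collections import defaultdict
    from scipy.stats import spearmanr  # rank correlation used to compare the two rankings

    # Invented example papers: co-authoring institutions and citation counts.
    papers = [
        {"institutions": ["A", "B"], "citations": 30},
        {"institutions": ["A"], "citations": 5},
        {"institutions": ["C", "B", "A"], "citations": 12},
        {"institutions": ["C"], "citations": 0},
    ]

    pubs_full, pubs_frac = defaultdict(float), defaultdict(float)
    cites_full, cites_frac = defaultdict(float), defaultdict(float)

    for paper in papers:
        share = 1.0 / len(paper["institutions"])
        for inst in paper["institutions"]:
            pubs_full[inst] += 1.0                          # one full point per co-authored paper
            pubs_frac[inst] += share                        # point divided among co-authoring institutions
            cites_full[inst] += paper["citations"]          # all citations credited to each institution
            cites_frac[inst] += paper["citations"] * share  # citations divided proportionally

    names = sorted(pubs_full)
    print("inst  pubs(full)  pubs(frac)  cites(full)  cites(frac)")
    for name in names:
        print(f"{name:>4}  {pubs_full[name]:10.2f}  {pubs_frac[name]:10.2f}"
              f"  {cites_full[name]:11.2f}  {cites_frac[name]:11.2f}")

    rho, _ = spearmanr([cites_full[n] for n in names],
                       [cites_frac[n] for n in names])
    print(f"Spearman rho between full and fractional citation rankings: {rho:.2f}")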

I don’t claim that citation-based rankings are better than those published previously. The reliability of bibliometric constructs and their validity as indicators of quality are two different issues.

Loet Leydesdorff
Amsterdam, The Netherlands
