Forum

I Want My Home Network

In "A Call for the Home Media Network," (July 2002), Gordon Bell and Jim Gemmell obviously understand where home networking has to go. Unfortunately they grossly underestimate what it takes to get there.

Devising systems that realize the old dream of a truly networked home requires an unprecedented degree of cooperation among the IT, consumer electronics, and entertainment industries. Since no two of these industries seem to be on speaking terms, the challenge is daunting.

The technical problems pale before the business issues, yet Bell and Gemmell devote only two cursory paragraphs to digital rights management (DRM). They note that "before willingly participating in the next generation of media distribution, publishers will insist on DRM to protect their content…" The fact is, the entertainment industry insists on conditions so draconian they could effectively prevent this next generation from happening.

The record industry is finally realizing that its business model has to change to allow electronic delivery of content if it is to survive, but it is giving ground only very, very grudgingly. The movie studios, in most cases sibling companies of the record firms, remain immovable.

The entertainment industry has legitimate fears of the new digital age, but it has to see past them to the opportunity. It also has to understand that its ability to control what consumers do with its products is limited; we’re past the era in which movie companies could control the medium on which a legitimate DVD release of a film is played.

The IT industry, meanwhile, has to recognize the legitimate needs of publishers and establish technical means to protect them while also providing for the rights of consumers. The IT and consumer electronics industries have to work together to provide open standards (a concept even more foreign in CE than in IT) for both rights management and basic networking and interconnections. In the rights management arena, it would also help if we could do better than such brain-dead efforts as the CSS encryption standard for DVDs and the Secure Digital Music Initiative, both of which were trivially broken.

Steve Wildstrom
Washington, DC

Congratulations on the excellent article by Bell and Gemmell. I agree with the authors that demand for such a product is on the rise and that people increasingly integrate entertainment devices into their homes. It will be interesting to see which devices belong in the "living room" environment and which belong in the "office" environment, as well as the various models for the underlying backbone.

Sherif Yacoub
Palo Alto, CA

I largely agree with the thrust of Bell and Gemmell’s article. However, I object to their assumption that publishers’ desire for DRM is justified.

DRM is absolutely unacceptable in a free society; it prevents individuals from exercising their fair use rights (mandated in the U.S. by the First Amendment). Under the circumstances, the accidentally reversed sense of the authors’ phrase "digital rights management to protect intellectual property theft" (they surely meant "to protect against…") turns out to be entirely accurate.

Publishers enter into the bargain known as copyright, which requires them to promise the public certain rights (fair use, eventual public domain). Then they use DRM to steal back those rights.

More mundanely, the authors made one unfair comparison, between a typical rat’s nest of analog cabling and an atypically tidy home networking setup (in fact, the latter is about as ratty as the analog cabling; the main difference is that the rat’s nest is hidden in a closet). The network-closet approach relies on being able to put Ethernet into the walls of the home, a rather daunting proposition requiring a wide mix of skills (network management, wiring, and plastering). Few people can install such a system without professional help.

John Stracke
East Kingdom, U.K.

Authors Respond:

We wrote this article to posit a vision that would stimulate the computing and consumer electronics industries. The last thing we expected was to be viewed as part of a marketing campaign for a (mythical) Mbox. The progression from Audiotron to Videotron to Digital Home Entertainment Center (which we dubbed the "Mbox") seems to us obvious and inevitable. We will become enthusiastic users of any standard IP-based home media network, regardless of who builds it.

Some readers concurred but believe we were optimistic, underestimating the business and digital rights management difficulties. Perhaps this criticism is valid. At present, the home media network still seems years away. However, we hope our article plays a role in speeding its arrival.

Gordon Bell and Jim Gemmell
San Francisco, CA


Consumer-Created Content Key to Broadband

In her "Staying Connected" column ("Broadband to Go," June 2002), Meg McGinity provides an excellent survey of the current situation in broadband Internet services.

Like most other writers on the subject, McGinity suggests the lack of "content" is holding back broadband deployment. This is correct, but no one seems to have a good idea what content is actually needed. The entertainment and other industries suggest the appropriate content is downloadable movies.

We believe this might be an added incentive, but given the large number of alternatives for accessing such content, it’s not likely to be the killer app that sends consumers scurrying to broadband.

A much more likely form of content is consumer-created content, such as high-resolution photos and, eventually, digital videos. Imagine the number of people signing up for broadband if they could easily attach a large library of high-resolution digital photos and video clips to an email message, sending and receiving them in seconds instead of minutes. Note that such applications require high bandwidth in both directions.
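
As a rough back-of-the-envelope illustration (the 50MB library size and the two link speeds below are illustrative assumptions, not figures from the letter), the difference between a narrowband and a broadband link is dramatic:

\[
\frac{50\ \text{MB} \times 8\ \text{bits/byte}}{128\ \text{kbps}} \approx 3{,}125\ \text{s} \approx 52\ \text{min},
\qquad
\frac{50\ \text{MB} \times 8\ \text{bits/byte}}{10\ \text{Mbps}} \approx 40\ \text{s}.
\]

The same arithmetic applies in the upstream direction, which is why substantial upload bandwidth matters as much as download bandwidth for consumer-created content.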

Bob Ellis, Fountain Hills, AZ
Myles Losch, Los Angeles, CA


Still Living Off Y2K Spending

My take from the McKinsey and Co. U.S. Productivity Report ("Viewpoint," July 2002) was not a fear that businesses in certain sectors "will stop investing in IT," as the authors wrote, but how critically important innovative customers are to business. Companies that adapt to serve these customers through IT will excel; otherwise, those customers will take their business elsewhere.

Moreover, McKinsey makes the point that competition drives innovation. IT people need to be on the side of deregulation and competition. The lasting benefit to society in the wake of the dot-com bust is not another lesson on the nature of human greed, which, as history teaches, will be experienced over and over, but rather the catapult-like advancement of Web service technology that occurred as a result of the apparent competition.

Dot-com spending and upgrades due to Y2K fears caused approximately $350 billion in IT spending to be pulled forward in 1999 and the first half of 2000. The industry is still laboring under the fat of that gorging.

Ronnie Ward
Houston, TX


Grading Education

Norris, Soloway, and Sullivan ("Examining 25 Years of Technology in U.S. Education," Aug. 2002) are surely correct that "to a first-order approximation, the effect of computing technology over the past 25 years on primary and secondary education has been zero." And no doubt they are also correct that "when certain conditions are met, computing technology has a positive effect on learning and teaching." But their recommended solution to this sorry state of affairs, increased access to computers, indeed a 1:1 ratio of students to computers, just will not stand scrutiny.

The crucial problem with education in the U.S. today is the lack of well-educated, intellectually able teachers. Moreover, the fact that the capabilities of those entering the teaching profession have been declining for many years is the chief reason the authors found no evidence that older teachers were the problem. Even if older teachers are less comfortable with computers than their younger colleagues, they are generally more able, which tends to balance things out. Until something is done about improving the average quality of American schoolteachers, the effect of technology on education will continue to be essentially zero, no matter how many computers are in U.S. schools.

The analogy the authors use at the end of their article, that with the advent of paperback books in the late 1950s "education changed when the ratio of books to children was 1:1," points in exactly the opposite direction they intend. The late 1950s saw the beginning of the decline in American educational attainment that continues to this day and that has resulted in the poor standing of U.S. education compared to our main economic competitors. It also saw the start of a rapid decline in the amount of reading done by almost all American children. No, paperback books were not the cause of poorer education and less reading, but they were helpless against the forces that caused these declines. Similarly, more computers in schools will have almost no effect on American education until U.S. schools and teacher education change in ways that attract the number of qualified professionals the country needs. Indeed, money spent on providing palm-sized computers to U.S. schoolchildren would be much better spent on improving teacher salaries, making schools more attractive workplaces, and improving our schools of education.

Anthony Ralston
Imperial College, London


Objective Opinion

As a practitioner for many years, I have learned to agree with Edsger Dijkstra: I too have a small head and must live with it. While I agree with Nick Ourosoff’s contention ("Technical Opinion," Aug. 2002) that primitives detract from a pure object-oriented approach, I disagree with his conclusion that they detract from learning that approach. Primitives afford a starting point that can be familiar and arguably more intuitive.

My experience with Smalltalk started in the early 1990s, while I was employed at a large software vendor working on a shop-floor-planning product. Coming from a background in several traditional languages, I found many aspects of it difficult to understand and become comfortable with, even though I read the books referenced by Ourosoff as well as several others. It was not until later, when I served as the same company’s representative to IBM’s San Francisco project and was involved with early versions of Java, that I started to embrace the object approach.

Traditional method/procedure calls made more sense than asking an integer to add one to itself or print itself. Being able to start with "System.out.println(i)", after having defined i as an int, seemed more understandable, just as moving in small, steady steps from "Hello, World" to Swing was a path I found comfortable and intuitive.
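
A minimal Java sketch of the contrast described here may help; the class and variable names are illustrative only and come from neither Ourosoff’s column nor this letter:

    // Two ways to "add one and print it" for a small integer value.
    public class PrimitivesVsObjects {
        public static void main(String[] args) {
            // Primitive style: the familiar starting point for programmers
            // coming from traditional procedural languages.
            int i = 1;
            System.out.println(i + 1);        // prints 2

            // Object style: box the value and ask it to render itself,
            // roughly the Smalltalk flavor of "asking an integer to print itself".
            Integer j = Integer.valueOf(i + 1);
            System.out.println(j.toString()); // also prints 2
        }
    }

Neither form is wrong; the point is simply that the first reads naturally to someone arriving from a procedural background.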

An argument could be made that my problems were caused by my background, but I don’t think my perspective is that unusual. My recent academic experience suggests the best approach to learning is still based on building blocks and that, even if legacy languages are not emphasized, programming languages, including Smalltalk, still involve sequence, selection, and iteration at the lowest level. Removing foundations and intuitive elements for the sake of purity may slow the process of understanding and acceptance. While primitives may get in the way for some, for others they could serve as the basis for growth built on previously acquired skills. I feel the potential benefit in increased understandability outweighs the problems caused by the loss of purity.

Rich Henders
Batavia, IL


PDF Peeves

Just a quick reply to the Forum letter by Greg Woods, "Enough PDF; Give Me HTML" (Aug. 2002, p. 12).

Although I didn’t read the original letter by Bertrand Meyer, I agree with Woods on the so-called "universality" of the PDF format. I too have had more trouble with PDF files than with anything other than MS Word files. So far, HTML seems to come closer than anything except plain text to a "universal" format (while still providing some reasonable formatting capabilities), since any machine with a browser can view and print it.

But what about other end-user considerations? Let’s call them "requirements," for lack of a better term. Since the publisher has no prior knowledge of the end user’s computing capabilities, here are some desirable document features:

1) The format should be based on open standards, not a proprietary file format. This implies a variety of tools are available for viewing, printing, editing, and searching (this leaves room for both free and commercial tools).

2) The format should allow for at least a modicum of integrity and authenticity checking.

In my view, the free software community (GNU/Linux, for example) has had at least the first part of this problem licked for years now. It’s called SGML. From a publishing perspective, creating a master source document in SGML gives you the best of all output worlds. You have an ASCII text master document that can be edited with any old text editor on any computing platform. From that one master document, given the appropriate tools, the author/publisher can easily create any format needed for book-quality printing (PS), electronic distribution (PDF), or online browsing (HTML). You can also generate RTF, ASCII, and other formats (all with a specified layout).

I believe SGML (along with all the supporting tools) satisfies the first requirement, as well as the desires of both Woods and Meyer, but as far as I know, only the LinuxDoc project and a few others do it this way. Why don’t more academics publish documents this way, since the basic tools are free? Other than brainwashing by marketing hype or a fear of new things, I have no idea.

As for the second requirement above, PDF comes close, but totally blows the first requirement out of the water. I’m sure there must be plenty of work being done in the second area, but I can’t say I’ve seen much.

Steve Arnold
Lompoc, CA


Homemade Hypercomputation

In their "Viewpoint" column on hypercomputation, (Aug. 2002) Christof Teuscher and Moshe Sipper ask about the relationship of hypercomputing to artificial and natural intelligence.

Hypercomputing is the computing of functions not "computable" in the normal sense. The number of possible functions is at least as great as the number of real numbers (uncountably infinite); the number of computable functions (functions for which there exists a program of finite length made from a finite set of symbols) is only countably infinite. Thus, there are plenty of functions only a hypercomputer can compute.
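
For readers who want the counting step spelled out, the standard cardinality comparison runs as follows (textbook material, not an argument taken from the column itself): programs are finite strings over a finite alphabet \(\Sigma\), so

\[
|\{\text{programs}\}| \;\le\; |\Sigma^{*}| \;=\; \aleph_0,
\qquad\text{whereas}\qquad
|\{\,f : \mathbb{N} \to \mathbb{N}\,\}| \;=\; \aleph_0^{\aleph_0} \;=\; 2^{\aleph_0} \;>\; \aleph_0 .
\]

Since only countably many functions can have programs, uncountably many functions are left over for a hypothetical hypercomputer.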

But functions are not all that the brain does. An example of an "impossible" task is creating a mental model of a 3D scene by looking at a 2D picture. There are any number of ways to arrange the elements in 3-space that produce the same picture, so this task is not a function (in the mathematical sense) at all. You can cast it as a function by defining the task as picking the "best" possibility. Then you can program a hypercomputer to generate all (infinitely many) possibilities, pick the best, present the result, and then stop.
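
A compact way to see why the 2D-to-3D task is not a function (the projection formula is standard, not taken from the letter): under perspective projection

\[
\pi(x, y, z) \;=\; \left(\frac{x}{z}, \frac{y}{z}\right), \qquad z > 0,
\]

every point on the ray \(\{(\lambda x, \lambda y, \lambda z) : \lambda > 0\}\) maps to the same image point, so a single 2D picture is consistent with infinitely many 3D scenes.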

The brain does something different: it presents a series of partial results. You can see this process in action by staring at an optical illusion such as a Necker cube (an ambiguous wireframe drawing of a cube). The cube appears to switch between two different orientations and keeps switching back and forth for as long as you care to watch it. The process never terminates. A hypercomputer inside the brain would settle the matter once and for all. I think optical illusions are evidence that either the brain does not contain a hypercomputer or it is not being put to good use.

Mark Lutton
Brookline, NH

