November 1991 - Vol. 34 No. 11

Features

Opinion

Supercomputing policy links science and the “C” word

Competition, a concept nary whispered in pure science circles, might just have been the catalyst that finally drove the High Performance Computing and Communications (HPCC) policy over the Hill. It has taken many years, and countless government studies, to help legislators even comprehend and appreciate the potential of high-performance computing technology in the U.S. But toss in the competitive angle, and the story needs little translation.
Research and Advances

The 21st ACM North American computer chess championship

After twenty years of traveling from city to city across the United States, the ACM North American Computer Chess Championship came back to its birthplace, the New York Hilton Hotel, where the competitions began in 1970. This latest five-round event ended in a two-way tie for first place between MEPHISTO and DEEP THOUGHT/88. Finishing in a two-way tie for third place were HITECH and M CHESS. A total of 10 teams participated, and the level of play was at the low grandmaster level. A special three-round end-game championship was won by MEPHISTO, which also captured the prize for the best Small Computing System. A total of $8000 in prizes was divided among the winners.
Research and Advances

Massively distributed computing and factoring large integers

Over the last 15 years the increased availability of computers and the introduction of the RSA cryptosystem have led to a number of new and remarkable algorithms for finding the prime factors of large integers. Factoring numbers is an arithmetic problem so simple to understand that schoolchildren are asked to do it. While multiplying or adding two very large numbers is simple and can be done quite quickly, the age-old problem of finding a number that divides another number still has no simple solution. Computer science has reached a point where it is starting to custom-tailor the design of computers toward solving specific problems. This pracnique will discuss some of the more recent algorithms for factoring large numbers and how networks of computers can be used to run these algorithms quickly. Since this is a general exposition, we do not give detailed mathematical descriptions of the algorithms. We also allow ourselves to be somewhat casual with mathematical notation in places and hope that the mathematically sophisticated will forgive the looseness.
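
The article surveys these methods at a high level. As an illustration only, not drawn from the article itself, the short Python sketch below shows Pollard's rho method, one well-known way to find a nontrivial factor of a composite number using nothing more than modular multiplication and greatest common divisors.

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of the composite number n (Pollard's rho)."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y, c, d = x, random.randrange(1, n), 1
        while d == 1:
            x = (x * x + c) % n      # "tortoise" advances one step
            y = (y * y + c) % n      # "hare" advances two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                   # unlucky cycle; retry with new parameters
            return d

# Example: 10403 = 101 * 103
print(pollard_rho(10403))
```
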
Research and Advances

CAPS: a coding aid for PASM

Programming parallel machines is very difficult. First, generating an algorithm requires the programmer to assimilate the interactions of multiple threads of control. Second, synchronization and communication among the threads must be addressed to avoid contention and deadlock. Then, once the program is executing on the parallel system and either fails to function correctly or performs poorly, debugging the multiple threads becomes a complicated problem [21]. Additionally, debugging software is an activity that requires systematic attention to detail. Success is a function of the experience of the individual involved and the tools employed. The ability to debug software efficiently requires the wisdom to know what questions to ask, the ability to analyze the answers received, and the knowledge to formulate the best next question. To aid in this interactive process, the programmer needs information about the run-time behavior of the program.
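
To make the deadlock hazard concrete, consider two threads that each need a pair of locks. If they acquire the locks in opposite orders, each can block forever waiting for the lock the other holds. The Python sketch below is a hypothetical illustration (not taken from the article, which targets the PASM parallel machine and its CAPS tool) of the usual remedy: impose one global acquisition order on all threads.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(name):
    # Every thread takes lock_a before lock_b. Breaking this order in any
    # one thread would create the circular wait that leads to deadlock.
    with lock_a:
        with lock_b:
            print(f"{name} holds both locks")

threads = [threading.Thread(target=worker, args=(f"thread-{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```
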
Opinion

Technical correspondence

I read with interest Peter Pearson's article, “Fast Hashing of Variable-Length Text Strings” (June 1990, pp. 677-680). In it he defines a hash function, given a text C1 … CN, by Exclusive OR'ing the bytes and modifying each intermediate result through a table of 256 randomish bytes.
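
For readers who want to see the scheme concretely, here is a minimal Python sketch of a hash as the letter describes it: the running value is exclusive-ORed with each input byte and then replaced by the corresponding entry of a 256-byte substitution table. The table here is a hypothetical stand-in built from a fixed seed, not Pearson's published table.

```python
import random

# Hypothetical substitution table: a pseudo-random permutation of 0..255.
random.seed(0)
T = list(range(256))
random.shuffle(T)

def pearson_hash(data: bytes) -> int:
    """One-byte hash: XOR each byte into the running value, then look it up in T."""
    h = 0
    for c in data:
        h = T[h ^ c]
    return h

print(pearson_hash(b"Fast Hashing"))  # a value in 0..255
```
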
Opinion

The human element

In past issues we have discussed various system-related disasters and their causes, both accidental and intentional. In almost all cases it is possible to allocate to people—directly or indirectly—those difficulties allegedly attributed to “computer problems.” But too much effort seems directed at placing blame and identifying scapegoats, and not enough at learning from experiences and avoiding such problems [1,2,5,6,7]. Besides, the real causes may implicitly or explicitly involve a multiplicity of developers, customers, users, operators, administrators, others involved with computer and communication systems, and sometimes even unsuspecting bystanders. In a few cases the physical environment also contributes, e.g., power outages, floods, extreme weather, lightning, and earthquakes. Even in those cases there may have been system people who failed to anticipate the possible effects. In principle, at least, we can design redundantly distributed systems that are able to withstand certain hardware faults, component unavailabilities, extreme delays, human errors, malicious misuse, and even “acts of God”—at least within limits. Nevertheless, in surprisingly many systems (including systems designed to provide continuous availability), an entire system can be brought to a screeching halt by a simple event just as easily as by a complex one [4].
