December 1990 - Vol. 33 No. 12

December 1990 issue cover image

Features

Opinion

Computing perspectives: the rise of the VLSI processor

Around 1970 Intel discovered it could put 2,000 transistors—or perhaps a few more—on a single NMOS chip. In retrospect, this may be said to mark the beginning of very large-scale integration (VLSI), an event which had long been heralded but had been seemingly slow to come. At the time, it went almost unnoticed in the computer industry. This was partly because 2,000 transistors fell far short of what was needed to put a processor on a chip, but also because the industry was busy exploiting medium-scale integration (MSI) in the logic family known as TTL. Based on bipolar transistors, and offering a wide range of parts each containing a few logical elements—typically two flip-flops or up to 16 gates in various combinations—TTL was highly successful. It was fast and versatile, and established new standards for cost effectiveness and reliability. Indeed, in an improved form and with better process technology, TTL is still widely used. In 1970, NMOS seemed a step backward as far as speed was concerned.

Intel did, however, find a customer for its new process; it was a company that was interested in a pocket calculator chip. Intel was able to show that a programmable device would be preferable on economic grounds to a special-purpose device. The outcome was the chip that was later put on the market as the Intel 4004. Steady progress continued and led to further developments: in April 1972 came the Intel 8008, which comprised 3,300 transistors, and then in April 1974 came the 8080, which had 4,500 transistors. The 8080 was the basis of the Altair 8800, which some people regard as the ancestor of the modern personal computer. It was offered in the form of a kit in January 1975. Other semiconductor manufacturers then entered the field: Motorola introduced the 6800 and MOS Technology Inc. introduced the 6502.

Microcomputers had great success in the personal computer market, which grew up alongside the older industry but was largely disconnected from it.
Minicomputers were based on TTL and were faster than microcomputers. With instruction sets of their own design and with proprietary software, manufacturers of minicomputers felt secure in their well-established markets. It was not until the mid-1980s that they began to come to terms with the idea that one day they might find themselves basing some of their products on microprocessors taken from the catalogs of semiconductor manufacturers, over whose instruction sets they had no control. They were even less prepared for the idea that personal computers, in an enhanced form known as workstations, would eventually come to challenge the traditional minicomputer. This is what has happened—a minicomputer has become nothing more than a workstation in a larger box, provided with a wider range of peripheral and communication equipment.

As time has passed, the number of CMOS transistors that can be put on a single chip has increased steadily and dramatically. While this has been primarily because improvements in process technology have enabled semiconductor manufacturers to make the transistors smaller, it has also been helped by the fact that chips have tended to become larger. It is a consequence of the laws of physics that scaling the transistors down in size makes them operate faster. As a result, processors have steadily increased in speed. It would not have been possible, however, to take full advantage of faster transistors if the increase in the number that could be put on a chip had not led to a reduction in the total number of chips required. This is because of the importance of signal propagation time and the need to reduce it as the transistors become faster.
It takes much less time to send a signal from one part of a chip to another part than it does to send a signal from one chip to another.

The progress that has been made during the last three or four years is well illustrated by comparing the MIPS R2000 processor, developed in 1986 with two-micron technology, with the Intel i860, developed in 1989. The former is based on a RISC processor which takes up about half the available space. This would not have left enough space for more than a very small amount of cache memory. Instead, the designer included the cache control circuits for off-chip instruction and data caches. The remaining space, amounting to about one-third of the whole, was put to good use to accommodate a Memory Management Unit (MMU) with a Translation Lookaside Buffer (TLB) of generous proportions. At this stage in the evolution of processor design, the importance of the RISC philosophy in making the processor as small as it was will be appreciated. A processor of the same power designed along pre-RISC lines would have taken up the entire chip, leaving no space for anything else.

When the Intel i860 processor was developed three years later, it had become possible to accommodate on the chip not only the units mentioned above, but also two caches—one for data and one for instructions—and a highly parallel floating-point coprocessor. This was possible because the silicon area was greater by a factor of slightly more than 2, and the amount of space occupied by a transistor less by a factor of 2.5. This gave a five-fold effective increase in the space available. The space occupied by the basic RISC processor itself is only 10% of the whole, as compared with 50% on the R2000. About 35% is used for the floating-point coprocessor and 20% for the memory management and bus control. This left about 35% to be used for cache memory.

There are about one million transistors on the i860—that is, 10 times as many as on the R2000, not 5 times as many as the above figures would imply. This is because much of the additional space is used for memory, and memory is very dense in transistors. When still more equivalent space on the silicon becomes available, designers who are primarily interested in high-speed operation will probably use the greater part of it for more memory, perhaps even providing two levels of cache on the chip. CMOS microprocessors have now pushed up to what used to be regarded as the top end of the minicomputer range and will no doubt go further as the transistor size is further reduced.

Bipolar transistors have followed CMOS transistors in becoming smaller, although there has been a lag. This is mainly because of the intrinsically more complex nature of the bipolar process; but it is also partly because the great success of CMOS technology has led the semiconductor industry to concentrate its resources on it. Bipolar technology will always suffer from the handicap that it takes twice as many transistors to make a gate as it does in CMOS.

The time to send a signal from one place to another depends on the amount of power available to charge the capacitance of the interconnecting wires. This capacitance is much greater for inter-chip wiring than for on-chip wiring. In the case of CMOS, which is a very low-power technology, it is difficult to provide enough power to drive inter-chip wiring at high speed. The premium placed on putting everything on the same chip is, therefore, very great.

Much more power is available with bipolar circuits, and the premium is not nearly so great. For this reason it has been possible to build multi-chip processors using gate arrays that take full advantage of the increasingly high speed of available bipolar technology.
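The charging argument can be made concrete with a rough sketch. The time to swing a wire through a voltage step is roughly t = C·V/I, so a technology with little drive current pays dearly for high-capacitance inter-chip wiring. All the capacitance, swing, and current values below are illustrative round numbers, not figures from the article:

```python
# Back-of-envelope model of interconnect delay: t = C * V / I.
# All values are illustrative assumptions (not data from the article);
# the point is the ratio between on-chip and off-chip delay.

def charge_time_ns(capacitance_pf, swing_v, drive_ma):
    """Time in nanoseconds to charge C (pF) through a swing V at current I (mA)."""
    return capacitance_pf * swing_v / drive_ma  # pF * V / mA = ns

ON_CHIP_PF, OFF_CHIP_PF = 0.1, 10.0   # assumed wire capacitances
SWING_V = 5.0                          # assumed logic swing
DRIVE_MA = {"CMOS": 1.0, "bipolar": 20.0}  # assumed drive currents

for tech, drive in DRIVE_MA.items():
    on = charge_time_ns(ON_CHIP_PF, SWING_V, drive)
    off = charge_time_ns(OFF_CHIP_PF, SWING_V, drive)
    print(f"{tech}: on-chip {on:.3f} ns, off-chip {off:.2f} ns")
```

With these assumed numbers, both technologies pay the same 100x relative penalty for going off-chip, but the bipolar delays are small enough in absolute terms to be tolerable, while the CMOS off-chip delay is crippling—which is the sense in which the premium on single-chip CMOS integration is so great.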
It is presently the case that all very fast computers on the market use multi-chip bipolar processors. Nevertheless, as switching speeds have become higher it has become necessary to develop interconnect systems that are faster than traditional printed circuit boards. It is becoming more and more difficult to do this as switching speeds continue to increase. In consequence, bipolar technology is approaching the point—reached earlier with CMOS—when further advance requires that all those units of a processor that need to communicate at high speed shall be on the same chip. Fortunately, we are in sight of achieving this. It will soon be possible to implement, in custom bipolar technology on a single chip, a processor similar to the R2000.

Such a processor may be expected to show a spectacular increase in speed compared with multi-chip implementations based on similar technology but using gate arrays. However, as it becomes possible to put even more transistors on a single chip, it may be that the balance of advantage will lie with CMOS. This is because it takes at least four times as many transistors to implement a memory cell in bipolar as it does in CMOS. Since any processor, especially a CMOS processor, gains greatly in performance by having a large amount of on-chip memory, this advantage could well tip the balance in favor of CMOS.

The advantage that would result from being able to put CMOS transistors and bipolar transistors on the same chip has not gone unnoticed in the industry. Active development is proceeding in this area under the generic name BiCMOS. BiCMOS is also of interest for analogue integrated circuits.

If the BiCMOS process were optimized for bipolar transistors, it would be possible to have a very high-performance bipolar processor with CMOS on-chip memory. If the bipolar transistors were of lower performance, they would still be of value for driving off-chip connections and also long-distance connections on the chip itself.

A pure bipolar chip with a million transistors on it will dissipate at least 50 watts, probably a good deal more. Removing the heat presents problems, but these are far from insuperable. More severe problems are encountered in supplying power to the chip and distributing it without a serious voltage drop or unwanted coupling. Design tools to help with these problems are lacking. A BiCMOS chip of similar size will dissipate much less power. On the other hand, BiCMOS will undoubtedly bring a spate of problems of its own, particularly as the noise characteristics of CMOS and bipolar circuits are very different.

CMOS, bipolar, and BiCMOS technologies are all in a fluid state of evolution. It is possible to make projections about what may happen in the short term, but what will happen in the long term can only be a matter of guesswork. Moreover, designing a computer is an exercise in system design, and the overall performance depends on the statistical properties of programs as much as on the performance of the individual components. It would be a bold person who would attempt any firm predictions.

And then, finally, there is the challenge of gallium arsenide. A colleague with whom I recently corresponded put it very well when he described gallium arsenide as the Wankel engine of the semiconductor industry!
Research and Advances

Real-time data acquisition at mission control

Perhaps one of the most powerful symbols of the United States' technological prowess is the Mission Control Center (MCC) at the Lyndon B. Johnson Space Center in Houston. The rooms at Mission Control have been witness to major milestones in the history of American technology, such as the first lunar landing, the rescue of Skylab, and the first launch of the Space Shuttle. When Mission Control was first activated in the early 1960s it was truly a technological marvel. This facility, however, has received only modest upgrades since the Apollo program. Until recently it maintained a mainframe-based architecture that displayed data and left the job of data analysis to flight controllers. The display technology utilized in this system was monochrome and primarily displayed text information with limited graphics (photo 1). An example display of 250 communication parameters is shown in Figure 1.

The mainframe processed incoming data and displayed it to the flight controllers; however, it performed few functions to convert raw data into information. The job of converting data into information upon which flight decisions could be made was performed by the flight controllers. In some cases, where additional computational support was required, small offline personal computers were added to the complex. Flight controllers visually copied data off the console display screens and manually entered it into the small personal computers, where offline analysis could be performed.

Although this system was technologically outdated, it contained years of customizing efforts and served NASA well through the early Space Shuttle program. Several factors are now driving NASA to change the architecture of Mission Control to accommodate advanced automation.
First is the requirement to support an increased flight rate without major growth in the number of personnel assigned to flight control duties.

A second major concern is the loss of corporate knowledge due to the unique bimodal age distribution of NASA staff. Hiring freezes between the Apollo and Shuttle programs have resulted in NASA being composed of two primary groups: approximately half of NASA consists of Apollo veterans within five years of retirement, while the other half consists of personnel under the age of 35 with Shuttle-only experience. NASA considers it highly desirable to capture the corporate knowledge of the Apollo veterans in knowledge-based systems before they retire. Because the mainframe complex is primarily oriented to data display, it is a poor environment for capturing and utilizing knowledge.

These factors have resulted in aggressive efforts by NASA's Mission Operations Directorate to deploy a distributed system of Unix engineering-class workstations running a mix of online real-time expert systems and traditional automation, allowing flight controllers to perform more tasks and capturing the corporate knowledge of senior personnel. Starting with the first flight of the Space Shuttle after the Challenger accident, the Real-Time Data System (RTDS) has played an increasingly significant role in the flight-critical decision-making process.
Research and Advances

An empirical study of the reliability of UNIX utilities

The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final section presents concluding remarks, including suggestions for avoiding the types of problems detected by our study and some commentary on the bugs we found. We include an Appendix with the user manual pages for fuzz and ptyjig.
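The idea behind the fuzz generator is simple enough to sketch. The original fuzz was a C program (with options such as printable-only output and seeding for reproducibility); the sketch below is not the authors' code, only a minimal Python illustration of the approach, and the function names are invented for this example:

```python
# Minimal sketch of random-input ("fuzz") testing in the spirit of the
# study above. Not the authors' fuzz/ptyjig tools; names and defaults
# here are illustrative assumptions.
import random
import subprocess

def fuzz_bytes(n, printable_only=False, seed=None):
    """Generate n random bytes, optionally restricted to printable ASCII.

    A seed makes a crashing input reproducible, as the original tool's
    seed option did."""
    rng = random.Random(seed)
    lo, hi = (32, 127) if printable_only else (0, 256)
    return bytes(rng.randrange(lo, hi) for _ in range(n))

def fuzz_utility(cmd, n=100_000, seed=0):
    """Feed random bytes to a utility's stdin and return its exit status.

    On POSIX systems a negative return code means the process was killed
    by a signal -- i.e., it crashed rather than rejecting the input."""
    data = fuzz_bytes(n, seed=seed)
    proc = subprocess.run(cmd, input=data, capture_output=True)
    return proc.returncode
```

A test script would then loop `fuzz_utility` over a list of utilities and varying seeds, logging any run whose return code indicates a crash or hang, which is roughly the role the authors' automation scripts play.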
Research and Advances

Experiences with the Amoeba distributed operating system

The Amoeba project is a research effort aimed at understanding how to connect multiple computers in a seamless way [16, 17, 26, 27, 31]. The basic idea is to provide the users with the illusion of a single powerful timesharing system, when, in fact, the system is implemented on a collection of machines, potentially distributed among several countries. This research has led to the design and implementation of the Amoeba distributed operating system, which is being used as a prototype and vehicle for further research. In this article we will describe the current state of the system (Amoeba 4.0), and show some of the lessons we have learned designing and using it over the past eight years. We will also discuss how this experience has influenced our plans for the next version, Amoeba 5.0. Amoeba was originally designed and implemented at the Vrije Universiteit in Amsterdam, and is now being jointly developed there and at the Centrum voor Wiskunde en Informatica, also in Amsterdam.

The chief goal of this work is to build a distributed system that is transparent to the users. This concept can best be illustrated by contrasting it with a network operating system, in which each machine retains its own identity. With a network operating system, each user logs into one specific machine—his home machine. When a program is started, it executes on the home machine, unless the user gives an explicit command to run it elsewhere. Similarly, files are local unless a remote file system is explicitly mounted or files are explicitly copied. In short, the user is clearly aware that multiple independent computers exist, and must deal with them explicitly.

In contrast, users effectively log into a transparent distributed system as a whole, rather than to any specific machine. When a program is run, the system—not the user—decides upon the best place to run it. The user is not even aware of this choice. Finally, there is a single, system-wide file system.
The files in a single directory may be located on different machines, possibly in different countries. There is no concept of file transfer, uploading or downloading from servers, or mounting remote file systems. A file's position in the directory hierarchy has no relation to its location. The remainder of this article will describe Amoeba and the lessons we have learned from building it. In the next section, we will give a technical overview of Amoeba as it currently stands. Since Amoeba uses the client-server model, we will then describe some of the more important servers that have been implemented so far. This is followed by a description of how wide-area networks are handled. Then we will discuss a number of applications that run on Amoeba. Measurements have shown Amoeba to be fast, so we will present some of our data. After that, we will discuss the successes and failures we have encountered, so that others may profit from those ideas that have worked out well and avoid those that have not. Finally we conclude with a very brief comparison between Amoeba and other systems. Before describing the software, however, it is worth saying something about the system architecture on which Amoeba runs.
Opinion

Inside RISKS: risks in medical electronics

The RISKS Forum has had many accounts of annoying errors, expensive breakdowns, privacy abuses, security breaches, and potential safety hazards. However, postings describing documented serious injuries or deaths that can be unequivocally attributed to deficiencies in the design or implementation of computer-controlled systems are very rare. A tragic exception was a series of accidents which occurred between 1985 and 1987 involving a computer-controlled radiation therapy machine.
