Current computer hardware technology is based on silicon chips. Since the invention of the integrated circuit in 1958 and its first commercial introduction in 1961, the advancement of this technology has been unprecedented, with computing speed and memory capacity doubling approximately every 18 months (the so-called Moore's Law). Past improvements in speed and memory size have been achieved by shrinking the circuitry; however, when circuit features approach molecular dimensions, various problems arise that prevent stable computing. Estimates of when further advancement will slow to a halt range from a pessimistic five years to a more optimistic 20. This does not mean the end of the current technology: silicon-based approaches have made truly remarkable advances in the past, backed by massive research and investment. Whatever power level is achieved when silicon-based computing reaches its limit, it can still be used for many years to come. With the anticipated end of the era, however, it is natural to search for completely new techniques.
It should be noted that the computing paradigms discussed in the articles assembled here do not necessarily aim to replace current technology. Some techniques are directed toward certain types of problems, such as computationally difficult problems, while others focus on special types of applications. Although practical, extensive, everyday use of most of these techniques as computing devices is yet to be seen, these ideas have stimulated the scientific community by their fundamental nature, their intrinsic novelty, and their potential as the basis for new forms of information processing and applications.
The research and development work for these new paradigms has mostly been reported in literature outside the computer science discipline, in fields such as physics and chemistry. One major reason is that each technology is still in its developmental infancy and has therefore focused primarily on basic implementation issues. As these paradigms mature, however, there will be more opportunities for computer scientists to become involved. One area is the hardware aspect of the technologies, such as architectural design. Another is the software aspect: developing computing schemes and algorithms specific to these technologies. An analogy is the development of new algorithms for parallel computers that extend or replace sequential algorithms. Ideas from one domain may be applied to others, whether to silicon-based technology or to a new paradigm, and such interdomain technical transfers may be possible in both hardware and software.
Since this special section covers such diverse domains, classifying the articles is not a simple matter. For convenience, however, the articles are divided into two parts. The articles in the first part encompass domains that are primarily based on nanoscale technology; the areas covered include computing schemes based on nanowires, carbon nanotubes, organic molecules, bio-DNA, and quantum physics. The second part contains articles on special forms of computing, including optical, micro/nanofluidic, and amoeba-based chaotic computing.
Atomic, Molecular, and Quantum Computing
The current silicon-based technology follows a "top-down" approach: starting at the macroscopic level and working down to smaller and smaller sizes, achieving faster computing and higher memory density along the way. As described earlier, this approach has limits to further down-scaling. The articles in this part take the opposite direction, "bottom-up": their basic computing elements are nanoscale atoms and molecules, and computers are built by assembling these elements, thus avoiding the problems of the top-down approach.
Robinett, Snider, Kuekes, and Williams discuss one promising aspect of atomic and molecular computing in which each element (switch) is on the order of 10 nanometers on a side. Programmable arrays of such switches can be made at a local density of 10^12 (one trillion) elements per square centimeter. These elements are connected by wires and serve as memory or logic circuits, the building blocks of a computer. Beyond the issue of how to assemble these tiny components, one major challenge is how to perform fault-tolerant computing using "crummy" elements that are not 100% reliable. This resembles the situation in the vacuum-tube era, but at a much smaller scale. One major strength of this technique is that the scaling problem of assembling a large number of computing elements has already been solved, which is not yet the case in many other areas. The authors also suggest a hybrid system combining this technique with traditional silicon-based technology to further extend Moore's Law.
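To make the defect-tolerance idea concrete, here is a minimal sketch (an illustration, not the authors' actual scheme) of the oldest trick for computing reliably with unreliable parts: triple modular redundancy, in which three "crummy" switches vote on each answer. The 5% error rate is an assumption chosen for the demonstration, and the majority voter itself is assumed reliable.

```python
import random

P_FAIL = 0.05  # assumed per-switch error rate, for illustration only

def crummy_and(a, b):
    """An AND switch that gives the wrong answer with probability P_FAIL."""
    out = a & b
    return out ^ 1 if random.random() < P_FAIL else out

def voted_and(a, b):
    """Triple modular redundancy: three crummy switches plus a majority vote."""
    votes = [crummy_and(a, b) for _ in range(3)]
    return 1 if sum(votes) >= 2 else 0

trials = 100_000
raw = sum(crummy_and(1, 1) != 1 for _ in range(trials)) / trials
tmr = sum(voted_and(1, 1) != 1 for _ in range(trials)) / trials
print(f"error rate: single switch {raw:.3f}, with majority vote {tmr:.4f}")
# roughly 0.05 vs. 0.007: redundancy trades element count for reliability,
# a trade the trillion-element densities described above can afford.
```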
Computing by carbon nanotubes, with such positive features as nanometer-sized diameters, potential speeds in the picosecond range, and less electron scattering, is very attractive as an approach to next-generation computing components. The article by Kong provides an overview of the current state of this technique. Basic logic gates, such as NOR and NAND, and memory cells have been constructed. However, integrating these gates into multi-bit adders has yet to be implemented, and this breakthrough will be one of the major advances necessary for this technique to become a full-scale computing technology. Commercial applications of carbon nanotubes for logic devices may come sooner under an alternative operating mechanism, such as nano-electromechanical switches for high-speed, high-density memory in RAM and flash memory.
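As a reminder of what that integration step involves logically, independent of the physical substrate, the following sketch builds a multi-bit adder purely out of NAND gates, the kind of gate already demonstrated in nanotubes. The graph and bit width are illustrative.

```python
def nand(a, b):
    """NAND is functionally complete: every other gate can be built from it."""
    return 1 - (a & b)

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, cin):
    s1 = xor(a, b)
    total = xor(s1, cin)
    carry = nand(nand(a, b), nand(cin, s1))  # equals (a AND b) OR (cin AND (a XOR b))
    return total, carry

def ripple_add(x, y, bits=4):
    """Chain full adders into a multi-bit adder, the integration step described above."""
    carry, out = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out

print(ripple_add(5, 6))  # 11
```

The point of the exercise is that once single gates and their cascading are reliable, everything above them is routine composition; the hard part reported in the article is entirely at the device level.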
Somewhat similar to carbon nanotubes, basic computing elements based on organic and other types of molecules have recently been reported. The article by Stadler documents the current state of this field. Techniques based on bulk materials such as polymers have already been commercialized. When the size is scaled down, especially to the level of individual molecules, however, the techniques are still in the research stage. Even so, basic computing elements, such as logic gates that support cascading, have been realized in some cases. Major advantages of this field are: the miniaturization provided by single-molecule technology; the expected cost reduction when components are mass produced by chemical synthesis; and the higher bio-compatibility of organic molecules compared to silicon. As a result, research activity in this field has recently increased, and it remains a potential candidate for future computing.
DNA computing, or more generally bio-DNA computing, was pioneered by Leonard M. Adleman at USC in 1994. In this technique, information is encoded on DNA, which is then used to perform biomolecular processes to achieve targeted computing. This field has experienced significant advancement since its inception, and in their article in this section, Reif and LaBean survey some of the recent developments. The single-element speed of this computing is slow, on the order of 10^2 to 10^3 seconds, but the use of massively parallel elements may provide a significantly faster overall effective speed. Specific features of this computing scheme include scalability and the autonomous self-assembly processes of the nanoscale DNA molecules. Such processes allow easy and efficient operation, and have been implemented in a growing number of laboratories around the world.
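The flavor of Adleman's original experiment, which solved a small Hamiltonian-path instance, can be sketched in conventional code: generate an enormous random pool of candidate paths (the role played in the lab by DNA self-assembly and ligation) and then filter the pool step by step (the role played by the biomolecular separation operations). The toy graph and the 100,000-strand pool here are illustrative assumptions, not Adleman's actual instance.

```python
import random

# A small directed graph; we seek a Hamiltonian path from vertex 0 to vertex 4.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
n, start, end = 5, 0, 4
out = {}
for u, v in edges:
    out.setdefault(u, []).append(v)

def random_walk():
    """One 'strand': a random walk from start, like random ligation of edge strands."""
    path = [start]
    while path[-1] in out and len(path) <= n:
        path.append(random.choice(out[path[-1]]))
        if path[-1] == end:
            break
    return path

pool = [random_walk() for _ in range(100_000)]        # massively parallel generation
pool = [p for p in pool if p[-1] == end]              # keep strands ending at the target
pool = [p for p in pool if len(p) == n]               # keep strands of the right length
pool = [p for p in pool if len(set(p)) == n]          # keep strands visiting every vertex
print(pool[0] if pool else "no Hamiltonian path found")  # [0, 1, 2, 3, 4]
```

Each filtering line corresponds to one laboratory step; the sheer size of the pool is what substitutes for per-element speed.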
Quantum computing, based on quantum physics, employs a totally different principle from silicon-based technology. In a quantum computer, each quantum bit, or qubit, can be 0, 1, or a superposition of states: 0 and 1 at the same time. A qubit can therefore behave differently from a classical bit, exhibiting quantum effects such as interference and nonlocality. Quantum computing has been applied to a variety of computationally difficult problems, including search, cryptography, and number theory. Bacon and Leung give a good overview of the subject, covering its subdomains, providing a short tutorial, and listing significant historical events. The development of quantum computing algorithms is of interest to many computer scientists. Commercial products are already on the market, and this field is likely to grow in the future to complement silicon-based computing.
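Superposition and interference can be illustrated with a few lines of linear algebra: a qubit is a two-component state vector, the Hadamard gate puts it into an equal superposition, and applying the gate a second time makes the two computational paths interfere so that the amplitude for 1 cancels.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])            # the basis state |0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

psi = H @ ket0
print(np.abs(psi) ** 2)   # [0.5 0.5]: measurement yields 0 or 1 with equal probability

psi2 = H @ psi
print(np.abs(psi2) ** 2)  # [1. 0.]: interference returns the qubit to |0> with certainty
```

A classical random bit run through two "coin flips" would stay random; the cancellation in the second step is precisely the quantum effect algorithms such as Grover's search exploit.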
Special Forms of Computing
The articles in the second part of the special section address areas that interpret the term "computing" in a broad sense. Optical computing, discussed by Abdeldayem and Frazier, has revolutionary potential for increased computing speed. It could leap to femtosecond (10^-15 second) switching, computing 10^5 times faster than current silicon-based technology. Because of this huge potential, there was much excitement about this technique, especially during the 1980s and early 1990s. The enthusiasm has diminished somewhat since then because of technical difficulties. One such difficulty is cascading, a problem shared by several of the other new paradigms. While a complete computer architecture has yet to be constructed, logic gates such as AND and XOR have been built, and an all-optical half adder was reported recently as well. My perspective is that this technology will eventually become reality, revolutionizing the computing field. For this to occur it will require a major breakthrough, and exactly how and when that will happen cannot be predicted. Perhaps optical computing will initially be used in conjunction with traditional silicon-based technology to enhance overall speed.
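Logically, the two reported gate types are exactly the ingredients of the half adder: the sum bit is the XOR of the inputs and the carry bit is their AND. A short sketch of that logic, independent of the optical implementation:

```python
def half_adder(a, b):
    """Sum is XOR of the inputs; carry is their AND."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
```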
Micro/nanofluidics, because of its very small size, has many advantages: processing times are typically much shorter than equivalent macroscale processes; thousands of channels can be placed on a small planar surface (lab-on-a-chip) allowing for parallel operations; and the sizes are close to those of individual cells and molecules. Potential applications include areas such as biomedicine and engineering. Micro/nanofluidic computing, discussed by Marr and Munakata, is a special-purpose computing paradigm incorporated within a micro/nanofluidics platform. Because of its slow speed, it does not aim to replace traditional silicon-based technology; instead, the major objective is to enhance the functionality of micro/nanofluidics by directly incorporating a computing capability and maintaining its inherent advantages. To date, computing elements such as logic gates, adders, and memory have been created and some techniques even allow cascading, which could lead to scalable integrated circuits.
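The memory elements mentioned here are, logically, latches: a pair of cross-coupled gates whose feedback loop holds a bit after the set or reset signal is removed. The sketch below shows that logic with NOR gates; it is a generic illustration of gate-based memory, not the specific fluidic design in the article.

```python
def nor(a, b):
    return 1 - (a | b)

def sr_latch(s, r, q, qb):
    """Cross-coupled NOR gates; iterate until the feedback loop settles."""
    for _ in range(4):
        q, qb = nor(r, qb), nor(s, q)
    return q, qb

q, qb = sr_latch(1, 0, 0, 1)   # set: q becomes 1
q, qb = sr_latch(0, 0, q, qb)  # hold: inputs released, the bit is remembered
print(q, qb)                    # 1 0
q, qb = sr_latch(0, 1, q, qb)  # reset: q returns to 0
print(q, qb)                    # 0 1
```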
The article by Aono, Hara, and Aihara introduces an intriguing neural computing scheme based on chaotic behavior in an amoeba. Because of its very slow speed, it would hardly be a practical computing device by itself. However, it is interesting from a scientific point of view for several reasons: it is the first actual, non-silicon-based implementation of a chaotic neuron model; it exhibits an interesting problem-solving capability for which speed may not be an issue; and there are many chaotic phenomena in nature, such as lasers and certain properties observed in atoms and molecules. The dynamics of these phenomena are very fast; some can easily surpass their current silicon-based counterparts. If the problem-solving techniques in this article can be realized in those media, the result could be a new, fast computing paradigm.
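The defining property such schemes exploit is sensitive dependence on initial conditions. The sketch below demonstrates it with the logistic map, a canonical chaotic system used here as a stand-in for the chaotic neuron dynamics (the model in the article itself is different):

```python
# Two trajectories of the logistic map x(t+1) = 4x(1 - x), started from
# almost identical states; in the chaotic regime their gap grows exponentially.
x, y = 0.400000, 0.400001
for t in range(30):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
print(abs(x - y))  # the initial gap of 1e-6 has grown to order 1
```

It is this exponential amplification of tiny differences that lets a chaotic system explore many states quickly, the property the amoeba scheme harnesses for problem solving.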
One of the most common questions is when the techniques currently under development will be employed extensively. I recall that when two previous AI-themed Communications issues for which I served as guest editor (March 1994 and November 1995) were published, some readers were skeptical about practical, everyday applications in the field. Since then, terms such as intelligent agents and smart computing have become common in many applications. It is difficult to predict accurately, but many of the nascent technologies described in this section may become significant computing domains in the next five to 15 years.