While Moore’s Law remains alive and well for the next decade, fundamental power limitations, diminishing returns from instruction-level parallelism, and overall design complexity constrain the future of computing. The industry needs to consider new vectors to continue to deliver improvements that keep pace with market trends and provide value to end users.
Moore’s Law made the case for continued wafer- and die-size growth, defect density reduction, and increased transistor density as manufacturing matured and technology scaled. It also sparked a revolution in microprocessor architecture innovation and design techniques that would deliver enormous computing power. In 1989, using Moore’s Law and extrapolating 20 years of trend data, Intel predicted the microprocessor of the year 2000, with a projected 50 million transistors on a die measuring 1.0-inch square, would operate at over 250MHz and perform over 750MIPS. While these projections seemed bold at the time, history has shown Intel underestimated the potential for gains in both frequency and performance that resulted from advances in process technology, microarchitecture, and design sciences. In the last decade alone, the technology scaled from 1.0 micron to 0.18 micron; frequency increased 50 times.
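To make that kind of extrapolation concrete, here is a minimal sketch of a Moore’s Law trend projection, assuming transistor counts double roughly every two years; the 1989 starting point (about 1.2 million transistors, roughly the 486 generation) and the doubling period are illustrative assumptions rather than figures from the text.

```python
# Illustrative Moore's Law-style extrapolation. The starting value and
# doubling period are assumptions chosen for illustration only.

def extrapolate(start_value, start_year, end_year, doubling_years):
    """Project a value forward assuming it doubles every `doubling_years` years."""
    periods = (end_year - start_year) / doubling_years
    return start_value * 2 ** periods

# ~1.2M transistors in 1989, doubling every two years, projected to 2000.
transistors_2000 = extrapolate(1.2e6, 1989, 2000, doubling_years=2.0)
print(f"Projected transistors in 2000: {transistors_2000 / 1e6:.0f} million")
```

Under these assumptions the projection lands near 54 million transistors, in the same range as the 50 million Intel predicted in 1989.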
The economic and physics challenges of maintaining Moore’s Law persist: modern fabrication plants cost well over $2 billion, and ever finer geometries push processes to their limits. These challenges do not fundamentally derail Moore’s Law for at least the next decade. However, if current trends hold, with microarchitectures growing ever more complex and intricate as we achieve further breakthroughs in performance and scaling, we are moving at full speed toward a brick wall: power consumption.
The power consumption of microprocessors is expected to reach 18 kilowatts by 2008. If we look at the platform as a whole, the situation is even worse in terms of power dissipation and delivery. Such power levels are prohibitive for any practical application, and we will have to evaluate alternatives if we are to continue delivering computational performance.
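The underlying trend follows from the first-order CMOS dynamic-power relation, P = activity × C × V² × f: switched capacitance and frequency have been growing much faster than supply voltage has fallen. The sketch below illustrates that scaling; every parameter value in it is an illustrative assumption, not a measurement or a figure from the text.

```python
# First-order CMOS dynamic power: P = activity * C_switched * V^2 * f.
# All parameter values are illustrative assumptions, used only to show how
# power grows when frequency and switched capacitance keep rising faster
# than supply voltage drops.

def dynamic_power(activity, c_switched_farads, vdd_volts, freq_hz):
    return activity * c_switched_farads * vdd_volts ** 2 * freq_hz

# Hypothetical "today" versus a future part with 4x the switched
# capacitance, 5x the frequency, and only a modest voltage reduction.
p_now    = dynamic_power(0.2, 40e-9, 1.5, 1e9)    # roughly 18 W
p_future = dynamic_power(0.2, 160e-9, 1.2, 5e9)   # roughly 230 W

print(f"today:  {p_now:6.1f} W")
print(f"future: {p_future:6.1f} W")
```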
There is still opportunity for architectural innovation to improve microprocessor performance. For example, general-purpose logic is expensive in terms of power per unit of performance, so it is more efficient to employ special-function logic blocks to deliver application-specific MIPS. Moreover, since many applications rarely drive the processor to its maximum power, we can apply design techniques that increase the execution core’s efficiency while adding only modest logic and power. One solution is to place multiple CPUs on a single die, sharing a large L2 cache, where throughput on transaction workloads scales nearly linearly with die size. This approach is better suited to server-class computers, which typically run threaded or transaction workloads and where other attributes, such as reliability, matter as much as peak single-thread performance. Finally, consider the multithreaded architecture, in which a single CPU is augmented to appear to software as two or more CPUs; it adds about 10% logic to the CPU design and increases maximum power by more than 10%, but can increase throughput by about 30%.
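As a rough illustration of why such techniques are attractive, the following back-of-the-envelope comparison uses the multithreading figures quoted above (about 10% more power for about 30% more throughput); the baseline and dual-core numbers are assumptions chosen only to show how throughput per watt shifts.

```python
# Back-of-the-envelope throughput-per-watt comparison. The multithreading
# figures (+10% power, +30% throughput) come from the text; the baseline
# and dual-core numbers are illustrative assumptions.

configs = {
    "baseline":      {"throughput": 1.00, "power": 1.00},
    "multithreaded": {"throughput": 1.30, "power": 1.10},
    # Two cores sharing an L2: assume near-linear throughput on transaction
    # workloads but roughly double the core power (assumption).
    "dual-core":     {"throughput": 1.90, "power": 1.95},
}

for name, cfg in configs.items():
    perf_per_watt = cfg["throughput"] / cfg["power"]
    print(f"{name:14s} throughput={cfg['throughput']:.2f} "
          f"power={cfg['power']:.2f} perf/W={perf_per_watt:.2f}")
```

Under these assumptions, multithreading improves throughput per watt by roughly 18% over the baseline, while the dual-core configuration trades efficiency for raw throughput.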
There will always be a need for performance. As in 1989, here’s a bold prediction: the microprocessor of 2010 will have one billion transistors on a die, operate at 20GHz to 30GHz, and perform over one trillion operations per second. But at what cost? Power, not manufacturability, is the challenge we face. Our industry needs to rise to the occasion and deliver innovative solutions that break through the power wall so we can continue to deliver value to end users.
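A quick sanity check shows why the prediction is bold: sustaining one trillion operations per second at a 20GHz to 30GHz clock means retiring tens of operations every cycle, far more parallelism per cycle than a conventional core delivers.

```python
# Sanity check: operations per cycle implied by one trillion ops/sec
# at the clock rates predicted above.
target_ops_per_sec = 1e12
for freq_ghz in (20, 30):
    ops_per_cycle = target_ops_per_sec / (freq_ghz * 1e9)
    print(f"at {freq_ghz} GHz: {ops_per_cycle:.0f} operations per cycle")
```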