Innovation in computing systems thrives at the beginning and the end of technology cycles. When facing the limits of an existing technology or contemplating the applications of a brand-new one, system designers are at their creative best. The past decade has been rich on both fronts, particularly for computer architects. CMOS technology scaling is no longer yielding the energy savings it once provided across generations; the resulting thermal constraints have drawn increased attention to so-called "wimpy processors," which achieve high performance and energy efficiency by using a larger number of low-to-modest-speed CPU cores. Also in the past decade, the consumer electronics industry's investment in non-volatile storage technologies has produced NAND FLASH devices that are becoming competitive for general-purpose computing, as they fit nicely within the huge cost/performance gap between DRAM and magnetic disks: FLASH-based storage devices are over 100 times faster than disks, although at over 10 times the cost per byte stored.
The emergence of wimpy processors and FLASH met a promising deployment scenario in the field of large-scale data centers for Internet services. These warehouse-scale computing (WSC) systems tend to run workloads that are rich in request-level parallelism (a match for the increased parallelism of wimpy CPUs) and are very data intensive (a match for the high input-output rates that are possible with FLASH technology). The energy efficiency potential of both these technologies could help lower the substantial energy-related costs of WSCs.
Given all this potential, how can we explain the rather slow pace of adoption of these technologies in commercial WSCs? At first glance, wimpy processors and FLASH seem compelling enough to fit within existing data center hardware and software architectures without the need for substantial redesign of major infrastructure components, thus facilitating rapid adoption. In reality, there are obstacles to extracting the maximum value from them. Hölzle¹ summarized some of the challenges facing wimpy cores in commercial deployments, including parallelization overheads (Amdahl's Law) and programmer productivity concerns. FLASH adoption has also suffered due to software-related issues. FLASH will not fully replace disks for most workloads due to its higher costs; storage system software must therefore be adapted to use both FLASH and disk drives effectively.
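The parallelization concern can be made concrete. Amdahl's Law bounds the speedup of any workload by its serial fraction, which is why replacing a few brawny cores with many wimpy ones does not automatically help. A minimal sketch (the function name and example fractions are illustrative, not drawn from Hölzle's analysis):

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Speedup over a single core when only `parallel_fraction`
    of the work can be parallelized across `n_cores` (Amdahl's Law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# A 95%-parallel workload on 16 wimpy cores gains only ~9x,
# and can never exceed 20x no matter how many cores are added.
print(round(amdahl_speedup(0.95, 16), 2))
```

The serial 5% dominates quickly, so per-core speed still matters; this is the heart of the wimpy-core objection.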
FAWN combines wimpy cores and FLASH to create an efficient, high-throughput, key-value storage system.
The lesson here is that to extract the most value from compelling new technology one often needs to consider the system more broadly, and rethink how applications and infrastructure components might be changed in light of new hardware component characteristics. This is precisely what the authors of the following article on FAWN have done.
FAWN presents a new storage hardware architecture that takes advantage of wimpy cores and FLASH devices, but does so alongside a new datastore software system infrastructure (FAWN-DS) that is specifically targeted to the new hardware component characteristics. The system is not a generic distributed storage system, but one that is specialized for workloads that require high rates of key-value lookup queries. By co-designing the hardware and software, and by targeting the system for a particular (but compelling) use case, the authors present a solution that has greater potential to realize the full value of new energy-efficient components. Their approach, which includes building and experimenting with actual software and hardware artifacts, is a model worthy of being followed by future systems research projects.
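The co-design the article describes follows a general pattern: writes go sequentially to an append-only log (the access pattern FLASH handles best), while a small in-memory index answers key-value lookups with a single random read. The sketch below illustrates that pattern only; the class and method names are hypothetical and do not reflect the authors' actual FAWN-DS interface:

```python
import hashlib

class LogStructuredKV:
    """Illustrative log-structured key-value store: an append-only
    data log plus an in-memory index, the general design pattern
    FAWN-DS exemplifies (names here are invented for this sketch)."""

    def __init__(self):
        self.log = []    # stands in for an append-only file on FLASH
        self.index = {}  # in-memory map: key hash -> log offset

    def put(self, key: bytes, value: bytes) -> None:
        # Writes are sequential appends; updates simply append a new
        # record and repoint the index, never rewriting in place.
        offset = len(self.log)
        self.log.append((key, value))
        self.index[hashlib.sha1(key).digest()] = offset

    def get(self, key: bytes):
        # One hash lookup in DRAM, then one random read from the log.
        offset = self.index.get(hashlib.sha1(key).digest())
        if offset is None:
            return None
        stored_key, value = self.log[offset]
        return value if stored_key == key else None
```

Keeping the on-FLASH traffic to sequential writes and single random reads is what lets modest CPUs drive the device at full rate, which is the essence of the hardware-software match the article highlights.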
©2011 ACM 0001-0782/11/0700 $10.00