News
Architecture and Hardware

Solid State Drives Transform Data Centers

A NAND flash memory board.
Market research firm Gartner predicts some form of solid-state drive technology will be present in 90 percent of enterprise storage environments by 2017.

Pricing and the ability to get more bang for the buck are among the factors driving enterprises to eye the use of solid-state drive (SSD) technology for their data centers. In fact, by 2017, some form of SSD technology will be in 90 percent of enterprise storage environments, compared with less than 20 percent in 2012, according to market research firm Gartner Inc.

Also playing a key role is the rise of big data, digital data, and a plethora of applications, which has left enterprises struggling to find the performance they need with their traditional hard disk drive (HDD) infrastructure, according to International Data Corp. (IDC). "The use of solid-state storage (SSS) in conjunction with solid state drives (SSDs) will play an important role in transforming performance as well as use cases for enterprise application data," the firm said. IDC is forecasting $1.2 billion in revenues in the SSS array market by 2015.

SSD technology is based on NAND flash memory, which "is not hindered by mechanics and is on a steep innovation curve," wrote analysts Gene Ruth and David J. Cappuccio in the report, "The Coming Revolution for Data Center Efficiency Will Be Driven by SSD Technology." SSD technologies offer both advantages and challenges, depending on their proximity to the processor, the authors say. An SSD can be installed directly in a server and used as a dedicated storage device, or deployed on a shared network, most commonly within a shared disk array. While the server-side approach provides "extraordinary improvements in performance," it comes at the expense of reduced scalability, reduced workload mobility, and greater operational complexity, Ruth and Cappuccio say. Conversely, when placed within an external shared storage array, SSD technology "enables a reduced operational workload and allows maximum workload mobility, increased protection, and scalability."
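To make the proximity tradeoff concrete, here is a toy latency model in Python. Every number in it is an illustrative assumption, not a figure from the Gartner report; the point is simply to show why a server-local SSD outruns the same media reached across a storage network, and why both dwarf spinning disk.

```python
# Toy latency model contrasting a server-local SSD with media reached
# through a shared external array. Every number below is an illustrative
# assumption, not a figure from the Gartner report.

LOCAL_SSD_US = 100     # assumed: SSD inside the server, per 4KB random read
NETWORK_HOP_US = 500   # assumed: round trip across the storage network
ARRAY_SSD_US = 100     # assumed: SSD media inside the shared array
HDD_US = 8000          # assumed: 7,200rpm disk, seek plus rotational delay

def reads_per_second(service_time_us):
    """IOPS a single outstanding request stream could sustain."""
    return 1_000_000 / service_time_us

for label, latency in [
    ("server-local SSD", LOCAL_SSD_US),
    ("SSD in shared array", NETWORK_HOP_US + ARRAY_SSD_US),
    ("HDD in shared array", NETWORK_HOP_US + HDD_US),
]:
    print(f"{label:20s} ~{latency:5d} us/read, ~{reads_per_second(latency):8.0f} IOPS")
```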

In terms of availability, there should be no impact, says Ruth. "If there was, application of SSDs would be highly constrained in enterprise IT infrastructures." OEMs have been careful not to degrade reliability and availability as they integrate SSDs into their arrays, he explains.

The analysts say IT should consider SSDs to help mitigate the overwhelming amounts of data flooding into data centers. Although they are not inexpensive, the authors add that "SSDs have instigated a rethinking of storage array design, resulting in dramatic equipment efficiencies measured in performance, floor space and power consumption."

The price of SSDs is becoming "more palatable," observes Mark Peters, a senior analyst at Enterprise Strategy Group, Inc. Far more important, he says, is that tools such as automated tiering and caching allow a relatively small amount of flash capacity to have a positive, "turbo-charging" impact on as much of a user’s data as is required.

"The amount and type of that flash ‘pool’ will itself define the extent of the impact that is possible,’’ explains Peters, "but at least such tools permit the positive impact to be spread over a greater spinning-storage-capacity, and not limit the available flash to being fixed (persistent) storage for a limited amount of data/application." This latter option is not a bad thing, he adds, but rather a different approach.

Sales of SSDs are on the rise in what Peters calls a chicken-and-egg scenario: There are more options and more demand for performance — not to mention the fact that well-implemented flash memory can actually reduce the overall cost of a full storage infrastructure.

Since all storage is about economics, Peters maintains, SSDs are viable for any company’s data center. "It’s the economic sense/system integration/software function that makes solid state work in the [data center]. Any type of company and increasingly — because it’s often deployed as a part of a dynamic hierarchy rather than a persistent tier so as to effectively get more bang for your buck — any type of application."

SSDs are a convenient way to package flash in a disk form factor so it can be deployed in consumer as well as enterprise storage systems. Hitachi Data Systems (HDS), for example, has packaged the flash technology into a Flash Module Device (FMD) with a special controller to increase durability, performance, and capacity for enterprise storage requirements, according to Hubert Yoshida, vice president and chief technology officer at HDS. "We still offer the SSD that other storage vendors offer, but we have seen a massive increase in interest and installation of our FMD over the traditional SSD," he says.

Impact on Energy Requirements

Another reason to consider SSDs is their environmental impact on the data center. Buying devices that can reduce energy consumption on a per-gigabyte basis by 80 percent, a figure that continues to grow, is compelling, according to Gartner. "Couple this with the impact of combinatorial devices on consumption (SSD + SATA) versus traditional HDD and the energy reduction, and subsequent reduction in operational costs, can be dramatic."
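The 80-percent figure is easy to sanity-check with back-of-the-envelope arithmetic. The wattages and capacities below are assumed values for illustration, not data from the Gartner report:

```python
# Back-of-the-envelope check of the 80% per-gigabyte energy figure.
# Wattages and capacities are assumed values for illustration only.

HDD_WATTS, HDD_GB = 10.0, 1000   # assumed enterprise 7,200rpm hard drive
SSD_WATTS, SSD_GB = 1.6, 800     # assumed enterprise SSD

hdd_w_per_gb = HDD_WATTS / HDD_GB
ssd_w_per_gb = SSD_WATTS / SSD_GB
reduction = 1 - ssd_w_per_gb / hdd_w_per_gb

print(f"HDD: {hdd_w_per_gb * 1000:.1f} mW/GB")       # 10.0 mW/GB
print(f"SSD: {ssd_w_per_gb * 1000:.1f} mW/GB")       #  2.0 mW/GB
print(f"per-gigabyte reduction: {reduction:.0%}")    # 80%
```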

Power and cooling consumption is tiny for SSDs compared to spinning disks, notes Peters, yet he adds this is more of a nice-to-have feature than a necessity in most environments. "Where it is a problem it is a huge problem, but most sites just view power and its cost as a necessary evil," Peters says. As a key factor in decision-making, power and cooling is a concern for only 3 percent of U.S. users, according to ESG research. The figure is slightly higher (4 percent) in Western Europe, he says, where power typically costs far more and availability is genuinely limited in metropolitan areas. Even there, though, Peters says it is still not, on average, a major factor.

When considering whether SSD technology is right for a data center, Gartner recommends assessing past and current application workload complexity and performance, and estimating what the future might hold, before modernizing storage and servers with SSDs. Clearly, the technology holds appeal, especially when IT requires improved performance with short implementation times.

When the need arises, Gartner also advises that SSDs be included "in a set of designated servers as a quick but high operational management overhead approach to accelerate performance. For highly variable workloads not conducive to auto-tiering algorithms and that require high availability, use pure SSD storage arrays on an as-needed basis."

The firm also recommends installing SSDs within existing storage arrays as a cache or high-performance pool "to benefit from operational simplicity and reduce architectural complexity."
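For readers unfamiliar with the auto-tiering algorithms the Gartner advice references, a toy policy sketch may help: each epoch, the hottest extents are promoted to the SSD tier and cold ones demoted. The function, extent names, and sizes below are hypothetical, not Gartner's or any vendor's implementation.

```python
# Toy auto-tiering policy of the kind the Gartner advice alludes to:
# each epoch, promote the most-accessed extents to the SSD tier and
# demote the rest to HDD. Names and sizes are hypothetical.

def retier(access_counts, on_ssd, ssd_slots):
    """Return (new SSD set, extents to promote, extents to demote)."""
    # Rank extents by how often they were touched in the last epoch.
    hottest = sorted(access_counts, key=access_counts.get, reverse=True)
    new_ssd = set(hottest[:ssd_slots])
    return new_ssd, new_ssd - on_ssd, on_ssd - new_ssd

# Example epoch: extent e2 was on SSD but has gone cold.
new, promote, demote = retier(
    {"e1": 50, "e2": 3, "e3": 41, "e4": 7}, on_ssd={"e2"}, ssd_slots=2)
print(f"SSD tier: {new}, promote: {promote}, demote: {demote}")
# Highly variable workloads defeat this scheme: the hot spot moves before
# the promotion takes effect, which is why Gartner points such workloads
# at pure-SSD arrays instead.
```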

Esther Shein is a freelance technology and business writer based in the Boston area.
