Post by Robin Harris
StorageMojo offered its soapbox to any vendor willing to weigh in on whether enterprise arrays should be built from flash SSDs. Ed Lee, architect at Tintri, formerly of Data Domain and a Berkeley Ph.D., elected to respond. It is a long piece but rich in insight.
Tintri produces hybrid disk/flash SSD appliances optimized for virtual environments, not Symm-killers. They use SSDs in their products, as do other folks like Nimble Storage.
No money changed hands between Tintri and StorageMojo or related entities. My accountant is weeping in the next room.
Begin Tintri’s response:
Outside the SSD Box: More than Faster Disk
Robin Harris of StorageMojo, in his recent article "Are SSD-based arrays a bad idea?", and Matt Kixmoeller of Pure, in his response, "The SSD is Key to Economic Flash Arrays," present interesting perspectives on whether or not SSDs are the best technology for building flash-based arrays. Robin argues that by rethinking how flash is packaged outside the SSD box, you can achieve better performance, reliability, cost, and flexibility. These observations are supported by the experience of existing flash-based storage vendors who have developed their own custom flash modules and packaging. Matt argues that SSDs provide an industry-standard product that requires less investment to leverage, better economies of scale, and rapid improvement in technology. These are also very valid points, especially for startups with limited time and capital.
Taking latency as a point of comparison, flash-based storage vendors using custom packaging often quote IO latencies in the tens of microseconds versus SSD latencies in the low hundreds of microseconds. While this is a notable difference, software and interfaces also add overhead, and the final latency seen at the subsystem level may differ by only a factor of two to four. Server-side flash products can avoid more of the software and interface overhead and provide better latencies, but may require rewriting applications to capitalize on this advantage. Keep in mind that hard disk latencies can easily reach tens of milliseconds under even moderate load. ALL of these flash-based products have latencies hundreds of times lower than disk's.
In short, most of the performance improvement comes from simply replacing hard disk with some form of flash. This immediately shifts the performance bottleneck from storage to some other component in your system. As a result, you won’t be able to take full advantage of flash performance without also optimizing the performance of the rest of your infrastructure, and ultimately rewriting your applications as well.
The above phenomenon explains why replacing your hard disk with flash often speeds up your applications by only a factor of two to three rather than ten or a hundred: congratulations, you've just moved the bottleneck from storage to some other component of your system. By Amdahl's Law, further improving only storage performance yields diminishing returns. So while custom packaging does provide significant advantages in latency, most applications are unlikely to benefit until the rest of the computing ecosystem is optimized to take full advantage of flash.
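Amdahl's Law makes this arithmetic concrete. The sketch below computes the overall speedup when only the storage fraction of a workload accelerates; the 30% storage fraction is an illustrative assumption, not a figure from any measurement above.

```python
# Amdahl's Law: overall speedup when only one component gets faster.
#   speedup = 1 / ((1 - p) + p / s)
# p: fraction of total time spent in the accelerated component (here, storage)
# s: speedup of that component

def amdahl(p, s):
    """Overall speedup when a fraction p of the workload runs s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative: an application spending 30% of its time waiting on storage.
# Even a 100x faster storage tier yields only ~1.4x overall.
print(round(amdahl(0.30, 100), 2))     # ~1.42
print(round(amdahl(0.30, 10_000), 2))  # ~1.43 -- further storage gains barely help
```

Note how the second call shows the diminishing returns: making storage another 100x faster moves the overall speedup by barely a hundredth.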
To take a closer look at SSD latencies, I ran the following simple experiment:
1) Erase an MLC SSD so that no logical blocks were actually mapped to flash, and then issue small random reads.
2) Overwrite the entire SSD so that all logical blocks are mapped, and issue the same small random reads as in step 1.
The idea here is to measure the software and protocol overheads of accessing flash packaged as an SSD separately from the cost of accessing the flash media itself. Reads with no blocks mapped had latencies of around 70µs, while reads with all blocks mapped had latencies of around 250µs. In this case only a fraction of the overall IO latency was due to software and protocol overhead, indicating that SSDs may still have significant room to improve latency.
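A rough version of this kind of latency measurement can be sketched in a few lines. This is a minimal sketch, not the original experiment: it times small random reads against a scratch file, which mostly exercises the software path. For a real SSD measurement you would open the raw block device (e.g. a path like /dev/nvme0n1, an assumption here) with O_DIRECT to bypass the page cache.

```python
import os, random, statistics, tempfile, time

def read_latencies(path, io_size=4096, n_reads=1000):
    """Time small random reads; returns per-read latency in microseconds.

    For a real SSD experiment, open the raw device with os.O_DIRECT to
    bypass the page cache -- reading a cached regular file, as in this
    demo, mostly measures software overhead."""
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    blocks = size // io_size
    latencies = []
    for _ in range(n_reads):
        offset = random.randrange(blocks) * io_size  # aligned random offset
        t0 = time.perf_counter()
        os.pread(fd, io_size, offset)
        latencies.append((time.perf_counter() - t0) * 1e6)
    os.close(fd)
    return latencies

# Demo against a scratch file standing in for the SSD's logical block space.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4096 * 256))  # 1 MiB of "mapped" blocks
    path = f.name
lats = read_latencies(path, n_reads=200)
print(f"median latency: {statistics.median(lats):.1f} us")
os.unlink(path)
```

Running the same loop once against an erased (unmapped) device and once against a fully written one separates protocol overhead from media access time, as in steps 1 and 2 above.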
Another important issue discussed by both Robin and Matt is the relative cost of flash packaged in SSD versus non-SSD form factors. Robin argues that an SSD costs significantly more $/GB than the underlying flash while Matt argues that non-SSD packaging is expensive to develop, and SSDs provide useful flash management functions as well as hot-swap capability. It’s certainly true that developing custom packaging has a high up front cost, although this is likely balanced by lower unit costs. But as Robin points out, there are also standard packaging options available for non-SSD form factor flash, which may make custom packaging for non-SSD flash unnecessary.
A very important point to keep in mind when thinking about commercially available SSD vs. non-SSD form factors is that SSDs are designed as a substitute for disk, while non-SSD form factors are often designed as substitutes for memory. This means that SSDs focus primarily on reducing $/GB (flash's greatest weakness vs. disk), while non-SSD form factors focus on reducing $/IOPS (flash's greatest weakness vs. DRAM). This explains why SSDs are currently much cheaper on a $/GB basis than PCIe flash, while PCIe flash designed as memory expansion is cheaper on a $/IOPS basis than SSDs. This is not to say that you can't build a non-SSD form factor with lower $/GB than SSD, just that the primary application for these non-SSD form factors today is usually not as a replacement for disk.
Whether flash in SSD versus non-SSD form factors is better for use in storage subsystems in the long run primarily depends on the relative volumes of these products, and the feature and price sensitivity of the applications these products serve. At this point the ‘winning’ form-factor seems hard to predict. So as a flash subsystem vendor, it seems desirable to keep your options open and ensure that your technology will work well with a variety of packaging options.
More than just a faster disk
But flash is about more than just performance and packaging: it enables much more than a faster, denser replacement for disk. With flash, we can finally remove a key mechanical barrier to scaling not only storage systems, but computing systems in general. Going forward, CPU, network, and storage can all scale with improvements in semiconductor technology. When transistors replaced vacuum tubes, we got more than just compact radios; we got simpler, more powerful computing systems. Similarly, flash is a catalyst that will enable far greater levels of automation and functionality for storage and computing systems than is possible today.
I tend to think of the value of new technology as the product of its simplicity times the functionality it offers. It’s clear why functionality is important, but why is simplicity so important? Technology that is simple to use will be used more often, to solve more problems, in less time. As a result, simplicity has a compounding effect on value:
Value = Simplicity * Functionality
How does one measure simplicity? One way is to list the basic steps it takes to perform a task and how long each step takes. One to three is good, four to six is manageable, and anything resembling a twelve step program will likely require written directions and a significant amount of focus. Note that in assessing the simplicity and functionality of a technology, one must do it in the context of the job that needs to be done. For example, a chainsaw has great features for cutting down trees but not for giving haircuts.
Read on here