Do we still need FAST (and its cohorts)?

In a recent conversation with an iXsystems™ reseller in Hong Kong, the topic of storage tiering came up. As we went about our banter, I brought up both the inter-array tiering and the intra-array tiering pieces.

After that conversation, I started thinking a lot about intra-array tiering, where data blocks within the storage array are moved between fast and slow storage media. The general policy was simple: find the least frequently accessed blocks and demote them from a fast tier, like the SSD tier, to a slower tier, like spinning drives of various RPM speeds. Then promote the data blocks back to the faster media when they are accessed frequently. Of course, there were other variables in the mix besides storage media and speeds.
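The demote/promote policy described above can be sketched in a few lines. This is a minimal illustration only, assuming hypothetical per-epoch access counters and made-up thresholds; it is not any vendor's actual Data Progression or FAST algorithm:

```python
from dataclasses import dataclass

# Illustrative thresholds -- real arrays tune these per policy, not fixed values.
HOT_THRESHOLD = 100   # accesses per epoch to qualify for the fast (SSD) tier
COLD_THRESHOLD = 5    # accesses per epoch below which a block is demoted

@dataclass
class Block:
    block_id: int
    tier: str = "hdd"          # "ssd" (fast tier) or "hdd" (slow tier)
    access_count: int = 0      # accesses observed during the current epoch

def rebalance(blocks):
    """Demote cold SSD blocks, promote hot HDD blocks, then start a new epoch."""
    for b in blocks:
        if b.tier == "ssd" and b.access_count < COLD_THRESHOLD:
            b.tier = "hdd"     # demotion: least frequently accessed
        elif b.tier == "hdd" and b.access_count >= HOT_THRESHOLD:
            b.tier = "ssd"     # promotion: frequently accessed
        b.access_count = 0     # reset counter for the next epoch
    return blocks
```

In a real array the rebalance pass would also weigh the other variables mentioned above (RAID level, media wear, capacity headroom), not just access frequency.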

My mind raced back 10 years or more to my first encounters with Compellent and 3PAR. Both were still independent companies then, and I had my first taste of intra-array tiering.

The original Compellent and 3PAR logos

I couldn’t recall which encounter came first, but I remembered that the two events were close in time. I was at Impact Business Solutions’ office listening to their Compellent pitch. The Kuching boys (thank you Chyr and Winston!) were very passionate in evangelizing the Compellent Data Progression technology.

At about the same time, I was invited by PTC Singapore’s GM at the time, Ken Chua, to grace their new Malaysian office and hear about their latest storage vendor partnership, 3PAR. I had known Ken since my NetApp® days, and he linked me up with Nathan Boeger, 3PAR’s pre-sales consultant. 3PAR had their Adaptive Optimization (AO) disk tiering and Dynamic Optimization (DO) technology.

Both sounded the same to me because they were designed to serve the same objective: I/O balancing. 3PAR was very Unix-like because the founders were from Sun, while Compellent had a gentler, more intuitive UI. Personally, I liked Compellent more, simply because their Storage Center UI was very soothing.

All this happened before the bidding wars where HP® beat out Dell® to acquire 3PAR, and Dell® bought Compellent shortly after.

Who else?

I wrote a piece on Storage Tiering back in 2011. Automated Storage Tiering was all the rage then. EMC® had FAST™ (Fully Automated Storage Tiering) VP and IBM® had Easy Tier®. I couldn’t remember which other storage vendors had tiering, but I am sure there were a few more.

Dell Compellent Data Progression in action

3PAR AO and DO in action

EMC FAST VP tiering layout

Is Storage Tiering still a thing?

If we take the definition of Storage Tiering of yonder years, is the concept and the technology still a thing now? Has it become the “old wine in a new bottle”? Let us explore a few views and opinions.

For one, the storage media has changed. Back then, the media were SLC (single level cell) and MLC (multi level cell) SSDs, 10K and 15K RPM SAS drives, and 7.2K RPM NL-SAS drives. There was a sizeable disparity in $/GB between each medium, and storage tiering made prudent sense.

Today, only NL-SAS drives remain in the market, while TLC (triple level cell) SSDs dominate the solid state storage media landscape. QLC (quad level cell) SSDs are on the rise. Adding to that is storage class memory (SCM) media such as Intel® Optane™. As often published by Intel®, there is a new storage media hierarchy/tier in town (shown below).

New Storage Media Hierarchy 2021 version

Pricing variables of the new storage media within a storage array, such as $/GB and Watt/GB, have certainly dropped significantly, while IOPS/GB has shot upwards. All these are good, really good, but the days of fervent touting of storage tiering have vapourized.

The number of storage tiers has been reduced from 4 or 5 to just 2, because storage class memory as defined by Intel® is persistent memory that is byte addressable rather than block addressable. Tiering between bytes and blocks is too much work to eke out the tiering advantage. Caching, rather than tiering, made more sense, and modern-day storage OSes and file systems take advantage of the speed of near-DRAM and read/write intensive SSDs to accelerate I/O workloads.
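The contrast with tiering is worth making concrete. With caching, the fast medium holds *copies* of hot data while the authoritative copy stays on the slower medium, so there is no migration bookkeeping. A minimal sketch, assuming a simple LRU read cache in front of a dict-like slow tier (purely illustrative, not any particular storage OS's design):

```python
from collections import OrderedDict

class ReadCache:
    """LRU read cache: fast media holds copies; the slow tier keeps the originals."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store          # dict-like slow tier (NL-SAS, say)
        self.cache = OrderedDict()            # fast tier (SCM/SSD) holds copies

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # cache hit: mark most recently used
            return self.cache[block_id]
        data = self.backing[block_id]         # miss: fetch from the slow tier
        self.cache[block_id] = data           # keep a copy in fast media
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the LRU copy; original intact
        return data
```

Eviction here is just dropping a copy, whereas a tiering engine would have to migrate the block back down and update its location metadata, which is part of why caching wins when the fast medium is byte addressable.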

These arguments along with the several variables seem to have made storage tiering obsolete, for now. The storage tiering pendulum may swing back again in the future, in a different shape and form.


About cfheoh

I am a technology blogger with 25+ years of IT experience. I write heavily on technologies related to storage networking and data management because that is my area of interest and expertise. I introduce technologies with the objectives to get readers to *know the facts*, and use that knowledge to cut through the marketing hypes, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then, there will be progress. I am involved in SNIA (Storage Networking Industry Association) and as of October 2013, I have been appointed as SNIA South Asia & SNIA Malaysia non-voting representation to SNIA Technical Council. I currently run a small system integration and consulting company focusing on storage and cloud solutions, with occasional consulting work on high performance computing (HPC).
