The big boys better be flash friendly

An interesting article came up in the news this week. The article, from the ever popular The Register, mentioned three up-and-coming storage stars, Nimble Storage, Tintri and Tegile, and their assault on a flash strategy “blind spot” of the big boys, notably EMC and NetApp.

I have known about Nimble Storage and Tintri for a couple of years now, and I did take some time to read up on their storage technology offerings. Tegile is new to me; it appeared on my radar after SearchStorage.com announced it as the Gold Winner of the enterprise storage category for 2012.

The Register article intrigued me because it implied that traditional storage vendors such as EMC and NetApp are probably applying a “band-aid” when putting together their flash storage strategies. And typically, I see these strategic concepts introduced by these two vendors:

  1. Have a server-side cache strategy by putting a PCIe card on the hosting server
  2. Have a network-based all-flash caching area
  3. Have a PCIe-based flash card on the storage system
  4. Have solid state drives (SSDs) in their disk shelf enclosures

In (1), EMC has VFCache (the server-side caching software has since been renamed XtremSW Cache and is being repackaged under the Xtrem brand name) and NetApp has its FlashAccel solution. Previously, as I was informed, FlashAccel was using the FusionIO ioTurbine solution, but just days ago NetApp added the LSI Nytro WarpDrive into its FlashAccel solution as well. The main objective of a server-side caching strategy using flash is to accelerate mostly read-based I/O operations for specific application workloads at the server side.
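To make the idea concrete, here is a minimal sketch of how a server-side flash read cache behaves. This is my own illustration, not VFCache or FlashAccel internals: reads are served from local flash when the block is already cached, and only fall through to the storage array on a miss.

```python
# A minimal sketch of the server-side read-cache idea: hot blocks live on
# PCIe flash in the server, cold reads fall through to the storage array.
from collections import OrderedDict

class FlashReadCache:
    def __init__(self, capacity_blocks, array_read):
        self.capacity = capacity_blocks      # size of the PCIe flash cache, in blocks
        self.cache = OrderedDict()           # block address -> data, kept in LRU order
        self.array_read = array_read         # callable that reads a block from the array

    def read(self, block_addr):
        if block_addr in self.cache:         # cache hit: served from server-side flash
            self.cache.move_to_end(block_addr)
            return self.cache[block_addr]
        data = self.array_read(block_addr)   # cache miss: go to the storage array
        self.cache[block_addr] = data
        if len(self.cache) > self.capacity:  # evict the least recently used block
            self.cache.popitem(last=False)
        return data
```

The point of the sketch is simply that the acceleration happens on the read path at the server, which is why write-heavy workloads see far less benefit.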


NFO for DFR

It has not caused severe pain yet, but it will. Storage is cheap, but as capacity grows it will eventually hit a limit that makes storage difficult to maintain from a cost perspective.

I wrote about the lack of attention given to primary storage deduplication solutions in the local industry. Perhaps deduplication has matured to a point where it has become a no-brainer, or perhaps customers are already sick and tired of the word “dedupe”. Either way, we should not be distracted from the fact that data footprint reduction (DFR) in the generic sense, or storage efficiency as a fancy marketing term, must be applied somewhere to slow down the purchase of storage capacity.

Storage is getting fatter, and storage vendors’ revenue is getting fatter along with it. While this is good for the pockets of vendors, the customers have to face higher costs associated with

  • Power, Cooling and Floorspace
  • Administration and management
  • Bandwidth
  • Resource utilization

None of this is prudent storage management practice, because fat storage is bad, just like human beings getting fatter. Similarly, storage must go on a diet, and deduplication is one of the few solutions out there. However, I have said before that deduplication merely shrinks the container that holds the bits of data, completely unaware of what the content is. Deduplication does not shrink the data itself, and if there is little duplication in the data, deduplication does not help in reducing the storage capacity. There is no advantage unless the data footprint reduction (DFR) technology is content aware. (Note that I am using DFR as a generic term rather than data deduplication. The reason is obvious.)
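To see why, here is a minimal sketch of conventional block-level deduplication. It is my own illustration, not any vendor’s engine: it fingerprints fixed-size chunks and stores each unique chunk once, but it has no idea whether those bytes are text, seismic traces or already-compressed video.

```python
# Content-blind, block-level deduplication in miniature.
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    store = {}          # fingerprint -> chunk, each unique chunk stored once
    recipe = []         # ordered fingerprints needed to rebuild the original data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)
        recipe.append(fp)
    return store, recipe

# If the data has little duplication (e.g. encoded seismic or video files),
# len(store) is almost equal to len(recipe) and nearly nothing is saved.
```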

That is why data deduplication technologies do not work well with seismic files or encoded video files; the files are already highly optimized. But there is a technology that can look deeper into such unstructured files and produce storage capacity reduction with specific algorithms for specific types of files and file objects. That technology, I believe, is the truest form of data footprint reduction, and it is called Native Format Optimization (NFO).

I want to relate an old story from a few years back, when I brought an EMC BURA (Back Up Recovery & Archive, a precursor to its present BRS division) senior manager to see a highly respected technical manager at Schlumberger in Malaysia. Schlumberger is the world’s largest oilfield services company and provides seismic analysis and interpretation software, and seismic files are highly encoded and compressed.

As usual, the senior manager, being a typical sales guy, started blabbering about how great Data Domain (this was just after the EMC acquisition) was, and how it could dedupe any kind of file at 20:1 (exaggerated to 500:1 for certain text files), even seismic files. I was signalling to the EMC senior manager to stop his bullsh*t, but he went on and on. In the end, the Schlumberger technical manager politely told the EMC senior manager to shut up, because he had little understanding of what seismic files are like.

Now, back to Native Format Optimization (NFO) technology. In a nutshell, NFO plays tricks on our human visual system. The goal is to reduce the size of unstructured files without reducing the visual quality of the images (text, texture, colour, resolution, depth, hue, contrast, etc) in those files.

Have a look at these 2 files. One is optimized with NFO and one is un-optimized. Can you tell the difference?


The human visual system is known to be:

  • Less sensitive to high-frequency colour variation
  • More sensitive to brightness than to colour variation
  • Less sensitive to background colour at lower resolutions
  • More sensitive to a picture’s motion than to its texture

Therefore, the eyes mostly perceive an image based on the lowest quality baseline. I got this information from a George Crump article at Storage Switzerland.
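To illustrate the principle (and this is only an illustration of perceptual optimization, not balesio’s or Ocarina’s actual NFO algorithm; the file names and parameters are made up), an embedded image can be re-encoded at a lower resolution and quality that the eye barely notices:

```python
# Illustrative only: shrink and re-encode an image at a perceptually
# acceptable quality, exploiting the eye's lower sensitivity to fine
# colour detail. Requires the Pillow library.
from PIL import Image

def optimize_image(src_path: str, dst_path: str, max_dim: int = 1600, quality: int = 75):
    img = Image.open(src_path).convert("RGB")    # JPEG has no alpha channel
    img.thumbnail((max_dim, max_dim))            # drop resolution the eye won't miss
    img.save(dst_path, "JPEG", quality=quality, optimize=True)

# Hypothetical file names, for illustration:
optimize_image("slide_photo.png", "slide_photo_optimized.jpg")
```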

Because NFO keeps files in their native form, the files do not need to be rehydrated the way deduped files do.

The capacity reduction savings are tremendous and, because the NFO approach is content aware, the benefits translate to higher cost savings in

  • Reduction of power, cooling and floorspace
  • Reduction in data management and administration tasks, especially backup
  • Improved bandwidth and improved disaster recovery
  • Higher performance
  • Delayed storage capacity purchase
  • many more

After the Ocarina acquisition by Dell in 2010, a search on the web revealed that probably only one vendor in Europe has boldly continued to enhance NFO technology in its products. The company is balesio and you can read about their NFO technology here.


Server way of locked-in storage

It is kind of interesting that every vendor out there claims to be as open as they can be, but the reality is that the competitive nature of the game is forcing storage vendors to talk open while their actions are certainly not.

Confused? I am beginning to see a trend … a trend that is forcing customers to be locked in with a certain storage vendor. I am beginning to feel that customers are being given fewer choices, especially when the brand of server they select for their applications will have implications on the brand of storage they will be locked into.

And surprise, surprise, SSDs are the pawns in this new cloak-and-dagger game. How? Well, I have been observing this for quite a while now, and when HP announced the SMART portfolio for its storage, it was time for me to say something.

In the announcement, it was reported that HP is coming out with its 8th generation ProLiant servers. As quoted:

The eighth generation ProLiant is turbo-charging its storage with a Smart Array containing solid state drives and Smart Caching.

It also includes two Smart storage items: the Smart Array controllers and Smart Caching, which both feature solid state storage to solve the disk I/O bottleneck problem, as well as Smart Data Services software to use this hardware.

From the outside, analysts are claiming this is a reaction to the recent EMC VFCache product (I blogged about it here), and HP was quick to paint the EMC VFCache solution as a first-generation product lacking the smarts (pun intended) of what the HP products have to offer. You can read about its performance prowess in the HP Connect blog.

Similarly, Dell announced its ExpressFlash solution, which ties its 12th generation PowerEdge servers to its flagship (what else), Dell Compellent storage.

The idea is very obvious. Put a PCIe-based flash caching card in the server, and use a corresponding caching/tiering technology that ties the server to a certain brand of storage. Only with this card, which (incidentally) works only with this brand of server, will you, Mr. Customer, be able to take advantage of the performance power of this brand of storage. Does that sound open to you?

HP is doing it with its ProLiant servers; Dell is doing it with ExpressFlash; EMC’s VFCache, while not advocating any brand of server, is doing it because VFCache works only with EMC storage. We have seen Oracle doing it with Exadata. Oracle’s enterprise database works best with Oracle’s own storage, and the intelligence is in its SmartScan layer, a proprietary technology that works exclusively with the storage layer in Exadata. Hitachi Japan, with its Hitachi servers (yes, Hitachi servers that we rarely see in Malaysia), has had such a technology for the last 2 years. I wouldn’t be surprised if IBM and Fujitsu already have something in store (or probably I missed the announcement).

NetApp has been slow in the game, but we hope to see them coming out with their own server-based caching products soon. More pure-play storage vendors are already singing the tune of SSDs (though not necessarily server-based).

The trend is obvious too, because the messaging is almost always about storage performance.

Yes, I totally agree that storage (any storage) has performance bottlenecks, especially when it comes to IOPS, response time and throughput. And every storage vendor is claiming SSDs, in one form or another, are the knight in shining armour, ready to rid the world of lousy storage performance. Well, SSDs are not the panacea for storage performance headaches, because while they solve some performance issues, they introduce new ones somewhere else.

But it is becoming an excuse to introduce storage vendor lock-in, and how have customers responded to this new “concept”? Things are fairly new right now, but I would always advise customers to find out and ask questions.

Cloud storage to avoid vendor lock-in? Going to the cloud has cloud service provider lock-in as well, but that’s another story.


Storage must go on a diet

Nowadays, the capacities of hard disk drives (HDDs) are really big. 3TB is out and 4TB is on the horizon. What’s next?

For small-medium businesses in Malaysia, depending on their data requirements and applications, 3-10TB is pretty sufficient, with room to grow as well. Therefore, a 6TB requirement can be easily satisfied with 2 x 3TB HDDs.

If I were the customer, why would I buy a storage array, with the software licenses and other stuff that not only increase my cost of equipment acquisition and data management but also increase the complexity of my IT infrastructure? I could just slot HDDs into my existing server, RAID them with RAID-0 (not a good idea, but to save costs most customers would do that) and I have a 6TB volume! It’s cheaper, easier to manage with Windows or Linux, and my system administrator doesn’t have to fuss about a lack of storage experience.

And RAID isn’t really keeping up with the tremendous growth of HDD capacity either. In fact, RAID is at risk. RAID (especially RAID 5/6) just cannot continue to provide the LUN or volume reliability and data availability, because it takes too damn long to rebuild the volume after the failure of a disk.

Back in the days when HDDs were less than 500GB, RAID-5 would still hold up, but after passing the 1TB mark, RAID-6 became more prevalent. Now that 1TB has ballooned to 3TB, RAID-6 is on shaky ground. What’s next? RAID-7? ZFS has RAID-Z3, triple parity, but come on, how many vendors have that? With triple parity or stronger RAID (is there one?), the price of the storage array is going to get too costly.
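A quick back-of-the-envelope calculation shows why. The numbers below are my own rough assumptions (a sustained 50MB/s rebuild rate), not vendor figures, but rebuild time grows linearly with capacity and that is what puts RAID-5/6 on shaky ground:

```python
# Rough rebuild-time estimate: time = capacity / sustained rebuild rate.
def rebuild_hours(capacity_tb: float, rebuild_mb_per_sec: float = 50.0) -> float:
    capacity_mb = capacity_tb * 1000 * 1000
    return capacity_mb / rebuild_mb_per_sec / 3600

for tb in (0.5, 1, 3, 4):
    print(f"{tb}TB drive: ~{rebuild_hours(tb):.1f} hours to rebuild")
# 0.5TB ~2.8h, 1TB ~5.6h, 3TB ~16.7h, 4TB ~22.2h at a sustained 50MB/s
```

And that is a best case with no competing host I/O; in real life the rebuild window, during which a second (or third) disk failure is fatal, stretches even longer.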

Experts have been speaking about parity declustering, but that is something only a few vendors have right now. Panasas, founded by one of the forefathers of RAID, Garth Gibson, comes to mind. In fact, Garth Gibson and Mark Holland of Carnegie Mellon University’s Parallel Data Lab (PDL) presented a paper about parity declustering more than 10 years ago.

Let’s get back to our storage fatty. Yes, our storage is getting fat, obese, rotund or whatever you want to call it. And storage vendors have been pushing a concept in the hope that storage administrators and customers will take advantage of it. It is called Storage Optimization or Storage Efficiency.

Here are a few ways you can consider to put your storage on a diet.

  • Compression
  • Thin Provisioning
  • Deduplication
  • Storage Tiering
  • Tapes and SSDs

To me, compression has not taken the storage world by storm. But then again, there aren’t many vendors that tout compression as a feature for storage optimization. Most of them prefer to push the darling of data reduction, data deduplication, as the main feature to save more space. Theoretically, data deduplication makes more sense when the data is inactive and has a high occurrence of duplicated data. That is why secondary storage such as backup deduplication targets like Data Domain, HP StoreOnce and Quantum DXi can publish 20:1 ratios, and over time that ratio can get even higher.
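A simple worked example (with assumed numbers, not figures from any vendor) shows where ratios like 20:1 come from: repeated full backups overlap almost entirely, so only the first full plus the changed blocks of each later full actually land on disk.

```python
# Why backup targets quote big dedupe ratios: assumed numbers for illustration.
full_backup_tb = 10.0      # size of one full backup
weekly_change_rate = 0.02  # assume 2% of blocks change between fulls
weeks = 20                 # retention: 20 weekly fulls

logical = full_backup_tb * weeks                                        # what the backup app wrote
physical = full_backup_tb + full_backup_tb * weekly_change_rate * (weeks - 1)
print(f"logical {logical:.0f}TB, physical {physical:.1f}TB, "
      f"ratio {logical / physical:.1f}:1")
# logical 200TB, physical 13.8TB, ratio ~14.5:1 - and it improves with longer retention
```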

NetApp has also been pushing its A-SIS data deduplication on primary storage. Yes, it helps with storage savings on primary storage, but once you factor in the need for higher data transfer rates and the time to access “manipulated” (deduped or compressed) data, compression is likely a better choice for primary, active data.

So who has compression? NetApp ONTAP 8.0.1 has compression now, and IBM’s Storwize, now carrying the V7000 name, started out as a compression device. Read about IBM Storwize in my blog here. Dell has Ocarina Networks, which was recently unleashed. I am a big fan of Ocarina Networks and I wrote about the technology in my previous blog. EMC had compression during the Celerra days of DART, but I don’t hear much about it in their VNX. Compression is there, believe me, buried under all the loads of EMC marketing.

Thin Provisioning is now a must-have, standard feature from all storage vendors. What is Thin Provisioning? The diagram below shows you:

In the past, storage systems weren’t so intelligent. You asked for 10TB, you were given 10TB, and that 10TB was “deducted” from the storage capacity. That leads to wastage and storage inefficiency. Today, Thin Provisioning will give you 10TB, but storage capacity is only consumed as it is being used. The capacity is not pre-allocated as in the past. Thin provisioning is a great diet pill for bloated storage projects.
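Here is a minimal sketch of the idea, my own illustration rather than any vendor’s implementation: the volume advertises its full logical size, but physical extents are only carved out of the shared pool when a region is first written.

```python
# Thin provisioning in miniature: allocate-on-first-write from a shared pool.
class Pool:
    def __init__(self, capacity_tb: float):
        self.free_tb = capacity_tb

    def take(self, tb: float) -> float:
        if self.free_tb < tb:
            raise RuntimeError("pool exhausted - time to buy more disk")
        self.free_tb -= tb
        return tb

class ThinVolume:
    def __init__(self, logical_tb: float, pool: Pool):
        self.logical_tb = logical_tb   # what the host sees, e.g. 10TB
        self.pool = pool               # shared free-capacity pool
        self.allocated = {}            # logical extent id -> physical TB actually used

    def write(self, extent_id: int, extent_tb: float = 0.001):
        if extent_id not in self.allocated:           # first write to this region
            self.allocated[extent_id] = self.pool.take(extent_tb)
```

The host believes it owns 10TB from day one, but the pool only shrinks as real data lands, which is exactly the “consumed as it is being used” behaviour described above.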

Another up-and-coming feature is storage tiering. Storage tiering, when associated with storage optimization, should include hierarchical storage management (HSM) and tape-out as well. Storage optimization solutions should not be offered only within the storage array itself. Storage tiering within the storage array is available from most vendors – IBM EasyTier, EMC FAST2, Dell Fluid Data Management and many others. But what about data being moved out of the storage array? What about reducing the capacity of the data online or near-line? Why not put it offline if there isn’t a need for it?

I term this Active Archiving, something I learned while I was at EMC. Here’s a look at EMC’s style of Active Archiving:

Active Archiving promotes the concept of data archiving and is not unique to EMC. Almost all storage vendors, either natively or with 3rd party software, can perform fairly efficient data archiving in one way or another. One piece of software that I liked (and it is not unique either!) is Quantum StorNext. Here’s a video of how Quantum StorNext helps reduce the fat of the storage.

With the single-copy sharing feature of Quantum StorNext across multiple disparate OSes, there are fewer duplicate files in storage as well.

Tape has been getting a bad name in the past few years. It has been repositioned and repurposed as an archive medium rather than a backup medium. But tape is the greenest and most powerful storage diet pill around, and we should not discount tapes because tapes are fighting back. Pretty soon you will be hearing about the Linear Tape File System (LTFS). In a nutshell, LTFS allows you to use the tape almost as if it were a hard disk. You can drag and drop files from your server to the tape, see the list of saved files using a standard operating system directory (no backup software catalog needed), and use point and click to restore. How cool is that!

And Solid State Drives (SSDs) make sense as well.

There are times when we need IOPS, and with spinning drives we have to set up many disk spindles to achieve the IOPS that we want. For example, consider the diagram below from the godfather of storage, Greg Schulz:

The set of 16 spinning HDDs on the left can only deliver 3,520 IOPS. The problem is, we have wasted a lot of disk space, as seen in the diagram below. This design, which most customers would be accustomed to, may look cheaper but in actual fact is NOT.

If the price of a Fibre Channel HDD is RM2,000, the total for 16 would come to RM32,000.00. That is not inclusive of additional power, cooling and rack space, and also the data management costs. Assume an SSD costs 5 times more than a Fibre Channel HDD. SSDs are capable of delivering very high IOPS; here I am putting a modest 5,000 IOPS per SSD. With just 2 SSDs (as the design on the right suggests), the total cost is only RM20,000. It has greater performance headroom to grow, and also savings in data management, power and cooling.
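For the record, here is the same arithmetic spelled out (the prices and IOPS figures are the same rough assumptions used above, not quotes from any vendor):

```python
# Spindle-count vs SSD arithmetic, using the assumptions from the text.
fc_hdd_price, fc_hdd_iops = 2000, 220       # ~220 IOPS per FC spindle (3,520 / 16)
ssd_price, ssd_iops = 5 * fc_hdd_price, 5000

hdd_design = {"drives": 16, "cost_rm": 16 * fc_hdd_price, "iops": 16 * fc_hdd_iops}
ssd_design = {"drives": 2,  "cost_rm": 2 * ssd_price,     "iops": 2 * ssd_iops}

print(hdd_design)   # {'drives': 16, 'cost_rm': 32000, 'iops': 3520}
print(ssd_design)   # {'drives': 2,  'cost_rm': 20000, 'iops': 10000}
```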

Folks, consider SSDs as part of your storage diet plan.

All these features are available, in whole or in part, and they are part of the storage technology offerings that are out there. With all that being said, are you doing something about it? Get off your lazy bum and start managing your storage and put your storage on a diet!!!

Ocarina rising

More than a year after Dell acquired Ocarina Networks, it finally surfaced last week in the form of the Dell DX Object Storage 6000G SCN (Storage Compression Node).

Ocarina is a content-aware storage optimization engine, and their solution is one of the best I have seen out there. Its unique ECOsystem technology, as described in the diagram below, is impressive.

Unlike most deduplication and compression solutions out there, the Ocarina Networks solution takes storage optimization a step further. Ocarina works at the file level, and given the crazy, crazy growth of unstructured files in the NAS space, the web and the clouds, storage optimization is one priority that has to be addressed immediately. It takes a 3-step process – Extract, Correlate and Optimize.

Today’s files are no longer a flat structure of a single object but more of a compound file where many objects are amalgamated from different sources. Microsoft Office is a perfect example of this. An Excel file consists of objects from Windows Metafile Formats, XML objects, OLE (Object Linking and Embedding) Compound Storage Objects and so on. (Note: that’s just Microsoft’s way of retaining monopolistic control.) Similarly, a web page is a compound of XML, HTML, Flash, ASP and PHP object code.

In Step 1, the technology takes a file and breaks it down into its basic components. It is kind of like breaking a car apart down to its nuts and bolts and laying every piece out on the gravel porch. That is the “Extraction” process, and it decodes each file to get at its fundamental components.

Once the compound file object is “extracted”, identified and indexed, each fundamental object is Correlated in Step 2. The correlation is executed within the file and across files under the purview of Ocarina. Matching and duplicated objects are flagged and deduplicated. The deduplication is done at the byte level, unlike most deduplication solutions, which operate at the block level. This deeper and more granular approach further reduces the storage capacity required, making Ocarina one of the most efficient storage optimization solutions currently available. That is why Ocarina can efficiently reduce the size of even zipped and highly encoded files.

It takes this storage optimization even further in Step 3. It applies content-aware compactors to each fundamental object type, uniquely compressing each object further. That means there are specialized compactors for PDF objects, ZIP objects and so on. They even have compactors for Oil & Gas seismic files. At the time I was exposed to Ocarina Networks and evaluating it, it had about 600+ unique compactors.
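Putting the three steps together, here is a conceptual outline of the Extract-Correlate-Optimize pipeline. This is purely my own sketch, not Ocarina’s code, and the compactor table is a hypothetical placeholder:

```python
# Conceptual Extract -> Correlate -> Optimize pipeline (illustration only).
import hashlib

COMPACTORS = {
    "jpeg": lambda obj: obj,   # placeholder: a real compactor would re-encode the image
    "xml":  lambda obj: obj,   # placeholder: a real compactor would compress the markup
}

def optimize(compound_file_objects):
    """compound_file_objects: list of (object_type, raw_bytes) pairs from Step 1."""
    seen, optimized = set(), []
    for obj_type, payload in compound_file_objects:        # Step 1 output: extracted objects
        fp = hashlib.sha256(payload).hexdigest()
        if fp in seen:                                     # Step 2: correlate and dedupe
            optimized.append(("ref", fp))                  # store only a reference
            continue
        seen.add(fp)
        compact = COMPACTORS.get(obj_type, lambda o: o)    # Step 3: content-aware compactor
        optimized.append((obj_type, compact(payload)))
    return optimized
```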

After Dell bought Ocarina in July 2010, the whole Ocarina operation went into stealth mode. Many had already predicted that the Ocarina technology would be integrated and embedded into Dell’s primary storage solutions, Compellent and EqualLogic. It is not there yet, but it will likely be soon.

Meanwhile, the first glimpse of Ocarina is as an integrated gateway solution to the Dell DX6000 Object Storage. DX Object Storage is a technology which Dell has OEMed from Caringo. The DX6000 Object Storage (I have not read up in depth) follows the concept of the old EMC Centera, but with a much newer approach based on XML and HTTP REST. It has a published open API, and Dell is getting ISV partners to develop applications that interact with the DX6000; Commvault, EMC, Symantec and StoredIQ are some of the ISV partners working closely with Dell.

(24/10/2011: Editor note: Previously I associated Dell DX6000 Object Storage with Exanet. I was wrong and I would like to thank Jim Dtuton of Caringo for pointing out my mistake)

Ocarina’s first mission is to reduce the big, big capacities in the Big Data space of the DX6000 Object Storage, and the Ocarina ECOsystem technology looks like a good bet for Dell as a key technology differentiator.