Hail Hydra!

The last stop of Storage Field Day 6 on November 7th took me and the other delegates to NEC. There was an obvious yet eerie silence among everyone about this visit. NEC? Are you kidding me?

NEC isn’t exactly THE exciting storage company in Silicon Valley, yet I was pleasantly surprised by their HydraStor prowess. It is indeed quite a beast, with published backup throughput of 4PB/hour and the ability to scale to 100PB of capacity. Most impressive indeed, and HydraStor deserves this blogger’s honourable architectural dissection.

HydraStor is NEC’s grid-based, scale-out storage platform with an object storage backend. The technology is powered by the DynamicStor™ software, a distributed file system laid over the HydraStor grid architecture. At the same time, its DataRedux™ technology provides global in-line deduplication as HydraStor ingests data for data protection, replication, archiving and WORM purposes. It is a massive data consolidation platform, storing gazillion loads of data (100PB you say?) for short-term and long-term retention and recovery.

The architecture is indeed solid, and its data availability goes beyond traditional RAID-level resiliency. HydraStor employs NEC’s proprietary erasure coding, called Distributed Resilient Data™. The resiliency knob can be configured to withstand up to 6 concurrent disk or node failures, but is set to a resiliency level of 3 by default.

We can quickly deduce that DynamicStor™, DataRedux™ and Distributed Resilient Data™ are the technology pillars of HydraStor. How do they work, and how do they work together?

Let’s look a bit deeper into the HydraStor architecture.

HydraStor is made up of 2 types of nodes:

  • Accelerator Nodes
  • Storage Nodes

The Accelerator Nodes (AN) are the access nodes. They interface with the HydraStor front end, which could be CIFS, NFS or OST (Open Storage Technology). The AN nodes chunk the incoming data and perform in-line deduplication at very high speed. They can reach 300TB/hour, which is blazingly fast!

The AN nodes also run DynamicStor™, handling the performance heavy-lifting portion of HydraStor. The chunked data from the AN nodes is then passed on to the Storage Nodes (SN), where it is further “deduped in-line” to determine whether the chunks are unique or not. It is a two-step inline deduplication process. Below is a diagram showing the ANs built above the SNs in the HydraStor grid architecture.

NEC AN & SN grid architecture
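To make the two-step handoff more concrete, here is a minimal sketch of how an AN-to-SN dedupe pipeline like this might look, assuming fixed-size chunks, SHA-256 fingerprints and an in-memory global index. All of those choices are my own illustrative assumptions, not NEC’s published internals.

```python
import hashlib

# Hypothetical sketch of a two-step inline dedupe handoff (not NEC's actual code).
# Step 1 (Accelerator Node): chunk the incoming stream and fingerprint each chunk.
# Step 2 (Storage Node): consult a global index to decide if the chunk is unique.

CHUNK_SIZE = 64 * 1024  # fixed-size chunking used here purely for illustration

def an_chunk_and_fingerprint(data: bytes):
    """Accelerator Node step: split data and compute a fingerprint per chunk."""
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk

class StorageNode:
    """Storage Node step: only store chunks whose fingerprints are unseen."""
    def __init__(self):
        self.global_index = {}   # fingerprint -> stored chunk

    def ingest(self, fingerprint: str, chunk: bytes) -> bool:
        if fingerprint in self.global_index:
            return False         # duplicate: reference the existing chunk, write nothing
        self.global_index[fingerprint] = chunk
        return True              # unique: write the chunk

sn = StorageNode()
payload = b"backup stream " * 10000
written = sum(sn.ingest(fp, c) for fp, c in an_chunk_and_fingerprint(payload))
print(f"unique chunks written: {written}")
```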

 

The HydraStor grid architecture is also very scalable, allowing dynamic scale-out and scale-in of both ANs and SNs. AN and SN nodes can be added to or removed from the system, auto-configuring and auto-optimizing while everything stays online. This capability further strengthens the reliability and resiliency of HydraStor.

NEC Hydrastor dynamic topology
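NEC does not disclose how data gets redistributed when nodes join or leave the grid, but consistent hashing is one common technique for absorbing topology changes online with minimal data movement. The sketch below is purely my own illustration of that general idea, not a description of DynamicStor™.

```python
import bisect, hashlib

# Illustrative only: consistent hashing is one common way a scale-out grid can
# absorb node additions/removals with minimal data movement. This is my own
# assumption for illustration, not a description of NEC's DynamicStor internals.

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes
        self.ring = []                      # sorted list of (hash, node)
        for n in nodes:
            self.add_node(n)

    def add_node(self, node: str):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (_hash(f"{node}#{i}"), node))

    def remove_node(self, node: str):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def locate(self, key: str) -> str:
        h = _hash(key)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["SN1", "SN2", "SN3"])
before = {k: ring.locate(k) for k in (f"chunk-{i}" for i in range(1000))}
ring.add_node("SN4")                        # scale out while "online"
moved = sum(1 for k, n in before.items() if ring.locate(k) != n)
print(f"chunks that moved after adding SN4: {moved} of 1000")
```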

Moving on to DataRedux™. DataRedux™ is HydraStor’s global in-line data deduplication technology. It performs dedupe at the sub-file level with a variable-length window. This is performed at both the AN node and SN node levels, chunking the data and creating unique hash values. All unique chunks are further compressed with a modified LZ compression algorithm, shrinking the data to its optimized footprint on the disk storage. To maintain the global in-line deduplication, the hash table is available across the HydraStor cluster.

NEC Deduplication & Compression
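NEC does not document how its variable-length window is implemented, but content-defined chunking with a rolling hash is the usual way to get variable-length dedupe segments. The sketch below uses a toy content hash and zlib as a stand-in for NEC’s modified LZ algorithm; both are my own assumptions for illustration.

```python
import hashlib, zlib

# Sketch of variable-length (content-defined) chunking plus LZ compression.
# The toy hash and the use of zlib are stand-ins; NEC's modified LZ algorithm
# and actual chunk-boundary logic are not publicly documented.

MASK = 0x0FFF                    # on average, a boundary roughly every 4KB
MIN_CHUNK, MAX_CHUNK = 1024, 64 * 1024

def cdc_chunks(data: bytes):
    """Yield variable-length chunks whose boundaries depend on the content.
    A real implementation would use a proper rolling hash (e.g. Rabin
    fingerprinting over a sliding window); this toy hash just illustrates
    the idea of content-defined boundaries."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF
        length = i - start + 1
        if (length >= MIN_CHUNK and (h & MASK) == 0) or length >= MAX_CHUNK:
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]

def dedupe_and_compress(data: bytes):
    """Fingerprint each chunk; compress only chunks not seen before."""
    store = {}
    for chunk in cdc_chunks(data):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            store[fp] = zlib.compress(chunk)    # stand-in for modified LZ
    return store

store = dedupe_and_compress(b"some repetitive backup data " * 5000)
print(f"unique compressed chunks: {len(store)}")
```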

The unique data chunks resulting from deduplication and compression are then written to disk using the configured Distributed Resilient Data™ (DRD) algorithm, at its set resiliency level.

Within DRD, erasure coding breaks the data up into multiple fragments and assigns parity to each grouping of fragments. If the resiliency level is set to 3 (the default), the data is broken into 12 pieces: 9 data fragments + 3 parity fragments. The 3 parity fragments correspond to the resiliency level of 3. See the diagram below of the 12 fragments spread across a group of selected disks in the storage pool of the Storage Nodes.

NEC DRD erasure coding on Storage Nodes
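As a back-of-the-envelope illustration of the fragment math, the snippet below assumes the 9 + 3 split generalizes to (12 − R) data fragments plus R parity fragments for resiliency level R, keeping the stripe at 12 fragments. That generalization is my assumption; NEC only publishes the level-3 default.

```python
# Back-of-the-envelope math for DRD-style erasure coding fragments, assuming a
# fixed 12-fragment stripe: (12 - R) data + R parity for resiliency level R.
# This generalization beyond the documented 9+3 default is my own assumption.

def drd_fragments(resiliency_level: int, total_fragments: int = 12):
    data = total_fragments - resiliency_level
    parity = resiliency_level
    overhead = total_fragments / data     # raw capacity per unit of user data
    return data, parity, overhead

for level in (3, 6):
    d, p, oh = drd_fragments(level)
    print(f"resiliency {level}: {d} data + {p} parity fragments, "
          f"{oh:.2f}x raw capacity, survives {p} concurrent failures")
```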

 

If HydraStor experiences a disk or node failure that results in the loss of one or more fragments, the DRD self-healing function will auto-rebuild and auto-reconfigure the recovered fragments on another set of disks, maintaining the level of 3 parities.

The resiliency level, as mentioned earlier, can be set as high as 6, allowing HydraStor to survive 6 concurrent disk or node failures in the grid. See below for how the autonomous DRD recovery works:

NEC Autonomous Data recovery
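The snippet below is a toy model of just the placement side of that self-healing behaviour: a stripe stays recoverable as long as the surviving fragments number at least the data fragment count, and lost fragments get rebuilt onto other healthy disks. The parity reconstruction math itself is omitted, and none of this represents NEC’s actual implementation.

```python
# Toy model of DRD-style self-healing placement (reconstruction math omitted).
# A stripe is recoverable while surviving fragments >= data fragment count;
# lost fragments are then rebuilt onto healthy spare disks.

DATA_FRAGMENTS, PARITY_FRAGMENTS = 9, 3
placement = {f"frag-{i}": f"disk-{i}" for i in range(DATA_FRAGMENTS + PARITY_FRAGMENTS)}

def heal(placement: dict, failed_disks: set, spare_disks: list) -> dict:
    lost = [f for f, d in placement.items() if d in failed_disks]
    surviving = len(placement) - len(lost)
    if surviving < DATA_FRAGMENTS:
        raise RuntimeError("too many concurrent failures: stripe unrecoverable")
    healed = dict(placement)
    for frag, disk in zip(lost, spare_disks):
        healed[frag] = disk          # rebuilt fragment written to a healthy disk
    return healed

healed = heal(placement, failed_disks={"disk-2", "disk-7"},
              spare_disks=["disk-12", "disk-13"])
print({f: d for f, d in healed.items() if f in ("frag-2", "frag-7")})
```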

Despite lacking the razzle-dazzle of most Silicon Valley storage startups and upstarts, credit must be given where credit is due. NEC HydraStor is indeed a strong showstopper.

However, in a market that is as fickle as storage, deduplication solutions such as HydraStor, EMC Data Domain and HP StoreOnce are being superseded by Copy Data Management technology, touted by Actifio. It was rumoured that EMC restructured their entire BURA (Backup Recovery Archive) division into DPAD (Data Protection and Availability Division) to go after the burgeoning copy data management market.

It would be good if NEC took notice and turned their HydraStor “supertanker” towards the Copy Data Management market. That would be something special to savour.

P/S: NEC. Sorry about the title. I just couldn’t resist it 😉

MASSive, Impressive, Agile, TEGILE

Ah, my first blog after Storage Field Day 6!

It was a fantastic week and I only got to fathom the sensations and effects of the trip after my return from San Jose, California last week. Many thanks to Stephen Foskett (@sfoskett), Tom Hollingsworth (@networkingnerd) and Claire Chaplais (@cchaplais) of Gestalt IT for inviting me over for that wonderful trip 2 weeks ago. Tegile was one of the companies I had the privilege to visit and savour.

In a world of utterly confusing messaging about Flash Storage, I was eager to find out what makes Tegile tick at the Storage Field Day session. Yes, I loved Tegile and the campus visit was very nice. I was also very impressed that they have more than 700 customers and over a thousand systems shipped, all within 2 years since they came out of stealth in 2012. However, I was more interested in the essence of Tegile and what makes them stand out.

I have been a long-time admirer of ZFS (Zettabyte File System). I have been a practitioner myself and I also studied the file system architecture and data structures some years back, when NetApp and Sun were involved in a lawsuit. A lot has changed since then and I am very pleased to see Tegile doing great things with ZFS.

Tegile’s architecture is called IntelliFlash. Here’s a look at the overview of the IntelliFlash architecture:

Tegile IntelliFlash Architecture

So, what stands out for Tegile? I deduce that there are 3 important technology components that define the Tegile IntelliFlash™ Operating System.

  • MASS (Metadata Accelerator Storage System)
  • Media Management
  • Inline Compression and Inline Deduplication

What is MASS? Tegile has patented MASS as an architecture that provides an optimized data path to the file system metadata.

Often, a typical file system’s metadata is stored together with the data. This results in less optimized data access because both the data and the metadata are given the same priority. However, Tegile’s MASS writes and stores the filesystem metadata in very high-speed, low-latency DRAM and Flash SSD. The filesystem metadata probably includes some very fine-grained and intimate details about the mapping of blocks and pages to the respective capacity Flash SSDs and the mechanical HDDs. (Note: I made an educated guess here and I would be happy if someone corrected me.)

Going a bit deeper, the DRAM in the Tegile hybrid storage array is used as an L1 read cache, while Flash SSDs are used as an L2 read and write cache. Tegile has taken the further consideration that the Flash SSDs used for this caching purpose are different from the denser, higher-capacity Flash SSDs used for storing data. The caching Flash SSDs are obviously the faster, lower-latency type of eMLC and, in the future, might be replaced by PCIe Flash optimized by NVMe.

Tegile DRAM-Flash Caching
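To show the idea of the tiering (not Tegile’s actual code), here is a minimal two-tier read-cache sketch: a small, fast L1 standing in for DRAM, a larger L2 standing in for the caching Flash, and misses falling through to the capacity tier with promotions on the way back up.

```python
from collections import OrderedDict

# Minimal two-tier read-cache sketch (my illustration, not Tegile's code):
# L1 models the small, fast DRAM cache; L2 models the larger Flash cache;
# misses fall through to the backing capacity tier and fills promote upward.

class TwoTierCache:
    def __init__(self, l1_size: int, l2_size: int, backing: dict):
        self.l1 = OrderedDict()          # DRAM: smallest, fastest
        self.l2 = OrderedDict()          # caching Flash: larger, slower than DRAM
        self.l1_size, self.l2_size = l1_size, l2_size
        self.backing = backing           # capacity Flash/HDD tier

    def _put(self, cache: OrderedDict, size: int, key, value):
        cache[key] = value
        cache.move_to_end(key)
        if len(cache) > size:
            cache.popitem(last=False)    # evict the least recently used entry

    def read(self, key):
        if key in self.l1:
            self.l1.move_to_end(key)
            return self.l1[key], "L1 (DRAM)"
        if key in self.l2:
            value = self.l2[key]
            self._put(self.l1, self.l1_size, key, value)   # promote to DRAM
            return value, "L2 (Flash)"
        value = self.backing[key]                          # capacity tier
        self._put(self.l2, self.l2_size, key, value)
        self._put(self.l1, self.l1_size, key, value)
        return value, "capacity tier"

cache = TwoTierCache(l1_size=2, l2_size=4, backing={f"blk{i}": i for i in range(10)})
for blk in ("blk1", "blk2", "blk3", "blk1"):
    print(blk, cache.read(blk)[1])
```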

This approach gives absolute priority and near-instant access to the filesystem’s metadata, making Tegile data access incredibly fast and efficient.

Tegile’s Media Management capabilities excite me. This is because it treats every single Flash SSD in the storage array with a very precise organization of 3 types of data patterns:

  1. Write caching, where high-I/O writes are focused on a small segment of the drive
  2. Metadata caching, where both read and write I/O are targeted at a slightly larger segment of the drive
  3. Data, which is laid out on the rest of the capacity of the drive

Drilling deeper, the high-I/O writes for write caching (item 1 above) are targeted at a drive segment range that is over-provisioned for greater efficiency and endurance. At the same time, the garbage collection (GC) of this segment is handled by the respective drive’s controller. This is important because the controller performs the GC function without inducing unnecessary latency in the storage array’s processing cycles, giving a further boost to Tegile’s already awesome prowess.
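As a rough illustration of how a single SSD’s capacity might be carved into those three regions, the snippet below uses made-up percentages and an arbitrary over-provisioning ratio; Tegile does not publish the actual split.

```python
# Rough sketch of carving a single SSD's LBA space into the three regions above.
# The percentages and the over-provisioning ratio are made-up placeholders;
# Tegile does not publish the actual split.

def carve_drive(capacity_gb: float, write_cache_pct=0.05, metadata_pct=0.10,
                overprovision_pct=0.20):
    raw_write_cache = capacity_gb * write_cache_pct
    metadata = capacity_gb * metadata_pct
    data = capacity_gb - raw_write_cache - metadata
    # The write-cache region hides extra raw capacity so the drive's own
    # controller can garbage-collect it without stalling the array.
    usable_write_cache = raw_write_cache * (1 - overprovision_pct)
    return {
        "write-cache segment, usable (GB)": round(usable_write_cache, 1),
        "metadata-cache segment (GB)": round(metadata, 1),
        "data segment (GB)": round(data, 1),
    }

print(carve_drive(800))
```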

In addition to that, IntelliFlash™ aligns every block and every page exactly to each segment and each page boundary of the drives. This reduces block and page segmentation, and thereby reduces issues with file locality and free-block locality. It also automatically adjusts its block and page alignments to different drive types and models. Therefore, I believe, it would know how to align itself to 512-byte or 520-byte sector drives.
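The small helper below shows why that alignment matters: the same logical offset rounds to different boundaries on 512-byte and 520-byte sector drives, and a layout engine has to round I/O to the native sector size to avoid read-modify-write penalties. This is a generic illustration, not Tegile’s code.

```python
# Generic illustration of sector alignment: the same logical offset lands on
# different device boundaries for 512-byte vs 520-byte sector drives, so I/O
# must be rounded to the native sector size to avoid read-modify-write cycles.

def align_down(offset: int, sector_size: int) -> int:
    return (offset // sector_size) * sector_size

def align_up(offset: int, sector_size: int) -> int:
    return -(-offset // sector_size) * sector_size   # ceiling division

for sector in (512, 520):
    start = align_down(10_000, sector)
    end = align_up(10_000 + 4096, sector)
    print(f"{sector}-byte sectors: aligned I/O spans bytes {start}..{end}")
```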

The Media Management function also has advanced cell care. The wear-leveling takes on a new level of advancement, where the efficient organization of blocks and pages on the drives reduces additional and often unnecessary erases and rewrites. Furthermore, the use of Inline Compression and Inline Deduplication also reduces the number of writes to the drive media, increasing their longevity.

Tegile Inline Compression and Deduplication

Compression and deduplication are 2 very important technology features in almost all flash arrays. Likewise, these 2 technologies are crucial to the performance of Tegile storage systems. They are both inline, i.e. Inline Compression and Inline Deduplication, and both are therefore boosted by the multi-core CPUs as well as the fast DRAM memory.
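To show why multi-core CPUs matter for inline data reduction, here is a generic sketch that fingerprints and compresses independent chunks across a process pool. It is a common pattern for this kind of workload, not a description of how IntelliFlash™ implements it.

```python
import concurrent.futures, hashlib, zlib

# Generic sketch of why multi-core CPUs help inline data reduction: hashing and
# compressing independent chunks parallelizes cleanly across a process pool.
# This is a common pattern, not Tegile's IntelliFlash implementation.

def reduce_chunk(chunk: bytes):
    """CPU-heavy per-chunk work: fingerprint for dedupe, then compress."""
    return hashlib.sha256(chunk).hexdigest(), zlib.compress(chunk)

def inline_reduce(chunks):
    seen = {}
    with concurrent.futures.ProcessPoolExecutor() as pool:
        for fingerprint, compressed in pool.map(reduce_chunk, chunks):
            seen.setdefault(fingerprint, compressed)   # keep one copy per fingerprint
    return seen

if __name__ == "__main__":
    chunks = [bytes([i % 7]) * 8192 for i in range(64)]    # lots of duplicates
    unique = inline_reduce(chunks)
    print(f"{len(chunks)} chunks in, {len(unique)} unique compressed chunks out")
```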

I don’t have the secret-sauce formula of how Tegile designed their inline compression and deduplication. But there’s a very good article on how Tegile views their method of data reduction for compression and deduplication. Check out their blog here.

The data-access metadata of each and every customer probably feeds into Intellicare, their cloud-based customer care program. Intellicare is another strong differentiator in Tegile’s offering.

Oh, did I mention they do unified storage as well, with both SAN and NAS, including SMB 3.0 support?

I left Tegile that afternoon on November 5th feeling happy. I was pleased to catch up with Narayan Venkat, my old friend from NetApp, who is now their Chief Marketing Officer. I was equally pleased to see Tegile advancing ZFS further than the others I have known. With so much technological advancement and more coming, the world is their oyster.