Considerations of Hadoop in the Enterprise

I am guilty. I have not been tending this blog for quite a while now, but it feels good to be back. What have I been doing? Since leaving NetApp 2 months or so ago, I have been active on the scene again. This time I am more aligned towards data analytics and its burgeoning impact on the storage networking segment.

I was intrigued by an article posted by a friend of mine on Facebook. The article (circa 2013) was titled “Never, ever do this to Hadoop”. It described the author’s gripe with the SAN bigots. I have encountered storage professionals who throw in the SAN solution every time, because that was all they knew. NAS, to them, was like that old relative who smelled of camphor oil, and they avoided NAS like the plague. Similarly, DAS was frowned upon, but how things have changed. The pendulum has swung back to DAS, and new market segments such as VSANs and Hyper Converged platforms have been dominating the scene in the past 2 years. I highlighted this in my blog, “Praying to the Hypervisor God”, almost 2 years ago.

I agree with the author, Andrew C. Oliver. The “locality” of resources is central to Hadoop’s performance.

Consider these 2 models:

[Diagram: Moving Data to Compute vs. Moving Compute to Data]

In the model on your left (Moving Data to Compute), the delivery process from Storage to Compute is HEAVY. That is because data has dependencies; data has gravity. However, if you consider the model on your right (Moving Compute to Data), delivering data processing to the storage layer is much lighter. Compute, or data processing, is transient, and the data in the compute layer is volatile. Once the compute's power is turned off, everything starts again from a clean slate, hence the volatile state.
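To make the contrast concrete, here is a minimal, hypothetical sketch (not Hadoop code; the node names, record counts and byte sizes are all made up) that counts how many bytes cross the network in each model. Pulling every record to a central compute tier moves the whole dataset; shipping the filter to the data nodes moves only the results.

```python
# A toy illustration of "moving data to compute" vs "moving compute to data".
# Everything here (node names, record counts, 8 bytes per record) is assumed.

DATA_NODES = {
    "node1": list(range(0, 1_000_000)),          # pretend each node holds a shard of records
    "node2": list(range(1_000_000, 2_000_000)),
}

def move_data_to_compute(predicate):
    """Pull every record over the 'network' to a central compute tier, then filter."""
    bytes_moved, results = 0, []
    for shard in DATA_NODES.values():
        bytes_moved += len(shard) * 8            # the whole shard crosses the wire
        results.extend(r for r in shard if predicate(r))
    return results, bytes_moved

def move_compute_to_data(predicate):
    """Ship the (tiny) predicate to each data node; only results cross the 'network'."""
    bytes_moved, results = 0, []
    for shard in DATA_NODES.values():
        local = [r for r in shard if predicate(r)]   # runs where the data lives
        bytes_moved += len(local) * 8
        results.extend(local)
    return results, bytes_moved

if __name__ == "__main__":
    wanted = lambda r: r % 100_000 == 0
    _, heavy = move_data_to_compute(wanted)
    _, light = move_compute_to_data(wanted)
    print(f"data-to-compute moved ~{heavy:,} bytes; compute-to-data moved ~{light:,} bytes")
```

Run it and the first number dwarfs the second. That, in a nutshell, is why Hadoop tries to schedule its processing on the nodes that already hold the data blocks.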

Continue reading

Discovery of the 8th element – Element R

I am so blind. After more than 20 years in the industry, I have chosen to be blind to one of the most important elements of data protection and availability. Yet, I have been talking about it over and over, and over again, but never really incorporated it into my mantra.

Some readers will know that I frequently use these 7 points (or elements) in my approach to storage infrastructure and information management. These are:

  • Availability
  • Performance
  • Protection
  • Accessibility
  • Management
  • Security
  • Compliance

A few days ago, I had an epiphany. I woke up in the morning feeling so enlightened, and yet conflicted and dumbfounded at the same time. It was so weird, and that moment continued to play in my mind like a broken record. I had to let it out, and hence I am writing this down now.

Element R – Recovery, Resiliency, Restorability, Resumption. That’s the element which I “discovered”. I was positively stunned that I had never incorporated such an important element in my mantra, until now. Continue reading

Really? Disk is Dead? From Violin?

A catchy email from one of the forums I subscribed to caught my attention. It went something like “…Grateful … Disk is Dead”. Here is the blog from Kevin Doherty, a Senior Account Manager at Violin Memory.

Coming from Violin Memory, this is pretty obvious because they have an agenda against HDDs. They don’t use any disks at all … in any form factor. They use VIMMs (Violin Inline Memory Modules), something no other vendor in the industry uses today.


I recalled my blog from 2012, titled “Violin pulling the strings”. Violin came up here in South Asia with much fanfare, lots of razzmatazz, and plenty of excitement. I was even invited to their product training at Ingram Micro in Singapore, where I met their early SE, Mike Thompson. Mike is still there, I believe, but the EMC veteran in Singapore whom I mentioned in my previous blog left almost a year after joining. So did the ex-Sun General Manager of Violin Memory in Singapore.

Continue reading

Hail Hydra!

The last stop of Storage Field Day 6, on November 7th, took me and the other delegates to NEC. There was an obvious, yet eerie, silence among everyone about this visit. NEC? Are you kidding me?

NEC isn’t exactly THE exciting storage company in Silicon Valley, yet I was pleasantly surprised with their HydraStor prowess. It is indeed quite a beast, with published numbers of 4PB/hour backup throughput, and it scales to 100PB of capacity. Most impressive indeed, and HydraStor deserves this blogger’s honourable architectural dissection.

HydraStor is NEC’s grid-based, scale-out storage platform with an object storage backend. The technology is powered by the DynamicStor™ software, a distributed file system laid over the HydraStor grid architecture. At the same time, it has the DataRedux™ technology that provides global in-line deduplication as the HydraStor ingests data for data protection, replication, archiving and WORM purposes. It is a massive data consolidation platform, storing gazillion loads of data (100PB, you say?) for short-term and long-term retention and recovery.

The architecture is indeed solid, and its data availability goes beyond traditional RAID-level resiliency. HydraStor employs their proprietary erasure coding, called Distributed Resilient Data™. The resiliency knob can be configured to withstand 6 concurrent disk or node failures, but is configured by default with a resiliency level of 3.

We can quickly deduce that DynamicStor™, DataRedux™ and Distributed Resilient Data™ are the technology pillars of HydraStor. How do they work, and how do they work together?

Let’s look a bit deeper into the HydraStor architecture.

HydraStor is made up of 2 types of nodes:

  • Accelerator Nodes
  • Storage Nodes

The Accelerator Nodes (AN) are the access nodes. They interface with the HydraStor front end, which could be CIFS, NFS or OST (Open Storage Technology). The AN nodes chunk the incoming data and perform in-line deduplication at very high speed. They can reach speeds of 300TB/hour, which is blazingly fast!

The AN nodes also run DynamicStor™, handling the performance heavy-lifting portion of HydraStor. The chunked data from the AN nodes is then passed on to the Storage Nodes (SN), where it is further “deduped in-line” to determine if the chunks are unique or not. It is a two-step inline deduplication process. Below is a diagram showing the ANs built above the SNs in the HydraStor grid architecture.

NEC AN & SN grid architecture


The HydraStor grid architecture is also very scalable, allowing the dynamic scale-in and scale-out of both ANs and SNs. AN nodes and SN nodes can be added to or removed from the system, auto-configuring and auto-optimizing while everything stays online. This capability further strengthens the reliability and resiliency of the HydraStor.

NEC Hydrastor dynamic topology

Moving on to DataRedux™. DataRedux™ is HydraStor’s global in-line data deduplication technology. It performs dedupe at the sub-file level, with a variable-length window. This is performed at the AN node and SN node levels, chunking the data and creating unique hash values. All unique chunks are further compressed with a modified LZ compression algorithm, shrinking the data to its optimized footprint on the disk storage. To maintain the global in-line deduplication, the hash table is available across the HydraStor cluster.

NEC Deduplication & Compression
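To illustrate the idea (and only the idea), here is a minimal Python sketch of hash-based inline deduplication with compression. It uses fixed-size chunks, SHA-256 fingerprints and zlib purely for convenience; DataRedux™ uses variable-length sub-file chunking and a modified LZ algorithm, so treat the chunk size, hash and compressor here as assumptions.

```python
# A minimal sketch of hash-based inline deduplication with compression.
# Fixed-size chunking, SHA-256 and zlib are stand-ins for illustration only.
import hashlib
import zlib

CHUNK_SIZE = 4096          # illustrative only
dedupe_index = {}          # global hash table: fingerprint -> compressed unique chunk

def ingest(data: bytes):
    """Chunk, fingerprint, and store only unique chunks (compressed)."""
    recipe, new_bytes = [], 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in dedupe_index:              # unique chunk: compress and keep it
            dedupe_index[fp] = zlib.compress(chunk)
            new_bytes += len(dedupe_index[fp])
        recipe.append(fp)                       # duplicates cost only a reference
    return recipe, new_bytes

def restore(recipe):
    """Rebuild the original stream from the chunk recipe."""
    return b"".join(zlib.decompress(dedupe_index[fp]) for fp in recipe)

if __name__ == "__main__":
    payload = b"backup " * 100_000
    recipe, stored = ingest(payload)
    assert restore(recipe) == payload
    print(f"ingested {len(payload):,} bytes, stored only {stored:,} bytes of unique data")
```

A second copy of the same backup stream would add almost nothing to the dedupe index, which is the behaviour the two-step AN/SN dedupe is chasing at a much larger scale.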

The unique data chunks resulting from deduplication and compression are then written to disk using the configured Distributed Resilient Data™ (DRD) algorithm, at its set resiliency level.

Within DRD, using erasure coding parity, the data is broken up into multiple fragments, and parity is assigned to a grouping of fragments. If the resiliency level is set to 3 (the default), the data is broken into 12 pieces: 9 data fragments + 3 parity fragments. The 3 parity fragments correspond to the resiliency level of 3. See the diagram below of the 12 fragments spread across a group of selected disks in the storage pool of the Storage Nodes.

NEC DRD erasure coding on Storage Nodes
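Here is a toy accounting model of that 9 + 3 layout, assuming an MDS erasure code where any 9 of the 12 fragments are enough to reconstruct the data. It is not NEC's DRD™ algorithm, just an illustration of why a resiliency level of 3 survives any 3 concurrent disk or node losses and no more.

```python
# Toy model: 9 data + 3 parity fragments spread across distinct disks.
# Assumes an MDS erasure code, i.e. any 9 of the 12 fragments reconstruct the data.
import itertools
import random

DATA_FRAGMENTS = 9
RESILIENCY_LEVEL = 3                        # parity fragments == resiliency level
TOTAL = DATA_FRAGMENTS + RESILIENCY_LEVEL   # 12 fragments per stripe

def place_stripe(disks):
    """Spread one stripe's 12 fragments across 12 distinct disks."""
    return random.sample(disks, TOTAL)

def survives(placement, failed_disks):
    """Data is recoverable if at least 9 fragments remain readable."""
    remaining = [d for d in placement if d not in failed_disks]
    return len(remaining) >= DATA_FRAGMENTS

if __name__ == "__main__":
    disks = [f"disk{i}" for i in range(20)]
    stripe = place_stripe(disks)
    # Every possible triple-disk failure is survivable at resiliency level 3 ...
    assert all(survives(stripe, set(f)) for f in itertools.combinations(stripe, 3))
    # ... but a fourth concurrent loss exceeds the parity budget.
    assert not any(survives(stripe, set(f)) for f in itertools.combinations(stripe, 4))
    print("resiliency level 3: any 3 concurrent fragment losses are tolerated")
```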


If the HydraStor experiences a failure in its disks or nodes that results in the loss of a fragment or fragments, the DRD self-healing function will auto-rebuild and auto-reconfigure the recovered fragments onto another set of disks, maintaining the level of 3 parities.

The resiliency level, as mentioned earlier, can be set up to 6, boosting the HydraStor survival factor to 6 disk or node failures in the grid. See below how the autonomous DRD recovery works:

NEC Autonomous Data recovery

Despite lacking the razzle-dazzle of most Silicon Valley storage startups and upstarts, credit must be given where credit is due. NEC HydraStor is indeed a strong showstopper.

However, in a market as fickle as storage, deduplication solutions such as HydraStor, EMC Data Domain and HP StoreOnce are being superseded by Copy Data Management technology, touted by Actifio. It was rumoured that EMC restructured their entire BURA (Backup Recovery Archive) division into DPAD (Data Protection and Availability Division) to go after the burgeoning copy data management market.

It would be good if NEC took notice and turned their HydraStor “supertanker” towards the Copy Data Management market. That would be something special to savour.

P/S: NEC. Sorry about the title. I just couldn’t resist it 😉

Can VSA help NetApp?

Almost a year ago, I had an interview with VMware Malaysia for a Senior SE position. They wanted a pre-sales guy who knew Oil & Gas and had a strong technology background. I had a strong storage background, and I had been involved in upstream Oil & Gas since my NetApp and EMC days.

I thought I was their guy, having been led to believe (mostly by my own self-belief) that I was. I didn’t get the job, and I never found out why I lost the opportunity. But I remember well that I brashly mentioned to the Australian interviewer over the phone that VMware could become the next “storage technology” company. At that time, VMware had just launched vSphere 5.0 and, along with it, their vSphere Storage Appliance (VSA). This was a turning point for the virtual storage appliance space.

My friend, whose company is a VMware partner, said that the list price for the vSphere VSA was USD5,000.00 a pop. The price wasn’t too bad for small and medium enterprise businesses in Malaysia, minus the hardware and storage capacity cost. But what intrigued me back then was that the virtual storage appliance concept was disruptive.

VMware could potentially take large JBOD farms, with a minimum of 3 physical ESXi nodes, and build shared storage using the vSphere Storage Appliance (VSA). Who needs shared iSCSI or Fibre Channel LUNs anymore if VMware had its way?

But VMware still depended pretty much on their storage partners, especially its master, EMC, and so I believe VMware held back from pushing the VSA to allow its storage partner ecosystem to thrive. For that reason, the vSphere Storage APIs, such as VAAI and VASA, have been developed since vSphere 4 to enable deeper integration of these storage vendors’ technologies into the VMware world.

But of course, long before VMware’s VSA venture, HP LeftHand already had one on the cards. The LeftHand Virtual SAN Appliance (also a VSA) was already getting rave comments from their partners and customers, impressed with how it showcased the HP LeftHand storage solution and technology brilliantly. Eventually, HP recognized the prowess of the LeftHand VSA and started marketing it as HP StoreVirtual VSA. I don’t hear much about the HP LeftHand (since renamed P4000) VSA nowadays, as the HP guys in Malaysia seem to prefer pitching physical storage over the virtual storage software.

NetApp, back in Q1 of 2012, also decided to go down the path of the virtual storage appliance, announcing ONTAP-v to the world here. It was initially resold through the Fujitsu partnership, but the Q1 announcement expands ONTAP-v to a larger set of server vendors, as shown below. The key requirement is to have a qualified RAID controller in each of the server vendors’ machines.

Continue reading

HUS VM is not virtual storage appliance

I was very confused by a recent HDS announcement, and it has been at the back of my mind for several weeks now.

In the last week of September 2012, HDS announced their Hitachi Unified Storage VM, aimed at small/medium enterprises (SMEs). Nothing wrong with that, except the VM part. I am not sure if it was the Computerworld author’s mistake, but he specifically mentioned VM as “virtual machine”. Check out the link here and the screenshot below:

It got me a bit riled up, thinking this was some kind of virtual storage à la the VMware Virtual Storage Appliance, NetApp ONTAP-v or even the early innovation of the HP LeftHand Virtual SAN Appliance. Apparently not!

I did some short investigation and found Nigel Poulton’s blog, which gave a fantastic dissection of the HUS VM. The VM is not virtual machine, but Virtual Midrange!

The HUS VM architecture is deep in ASICs, given HDS’ long history in ASIC design and manufacturing. SiliconFS is the NAS front end, while the iSCSI and FC parts are serviced by the same HDS microcode as the higher-end HDS VSP. Here’s a look at the hardware architectural diagram from Nigel’s blog:

There are plenty of bells and whistles in the HUS VM, armed with plenty of 8Gbps FC ports, a 6Gbps SAS backend, SSDs, and software such as Dynamic Provisioning (thin provisioning) and Dynamic Tiering.

Continue reading

4TB disks – the end of RAID

Seriously? 4 freaking terabyte disk drives?

The enterprise SATA/SAS disks have just grown larger, up to 4TB now. Just a few days ago, Hitachi boasted the shipment of the first 4TB HDD, the 7,200 RPM Ultrastar™ 7K4000 Enterprise-Class Hard Drive.

And just weeks ago, Seagate touted that their Heat-Assisted Magnetic Recording (HAMR) technology will bring forth 6TB hard disk drives in the near future, and 60TB HDDs not far on the horizon. 60TB is a lot of capacity, but a big, big nightmare for disk availability and data backup. My NetApp Malaysia friend joked that the RAID reconstruction of a 60TB HDD would probably finish by the time his daughter finishes college, and his daughter is still in primary school!

But the joke reflects something very serious we are facing: the capacity of HDDs is forever growing into something that could become unmanageable if the traditional implementation of RAID does not change to meet such monstrous capacities.

Yes, RAID has changed since 1988, as every vendor approaches RAID differently. NetApp was always about RAID-4 and later RAID-DP, and I remember the days when EMC had RAID-S. There was even a vendor in the past who marketed RAID-7, but it was proprietary and wasn’t an industry standard. Fundamentally, though, RAID did not change in a revolutionary way and continued to withstand the ever-ballooning capacities (and pressures) of the HDDs. RAID-6 was introduced when the first 1TB HDDs came out, to address the risk of a possible second disk failure in a parity-based RAID like RAID-4 or RAID-5. But today, the 4TB HDDs could be the last straw that breaks the camel’s back, or in this case, RAID’s back.

RAID-5 is obviously dead. Even RAID-6 might be considered insufficient now. Having a 3rd parity drive (3P) is an option, and the only commercial technology that I know of with 3-parity-drive support is ZFS. But having 3P causes additional overhead in performance and usable capacity. Will the fickle customer ever accept such trade-offs?

Note that 3P is not RAID-7. RAID-7 is a trademark of an old company called Storage Computer Corporation, and RAID-7 is not a standard definition of RAID.

One of the biggest concerns is rebuild times. If a 4TB HDD fails, the rebuild could take days at average rebuild speeds. The failure of a second HDD could push the rebuild time out to a week or so … and the data is vulnerable the whole time the disks are being rebuilt. A rough calculation of the exposure is sketched below.
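A quick back-of-the-envelope calculation shows why. The rebuild rates below are assumptions (sustained rebuild throughput varies widely with array load, drive type and RAID implementation), but the orders of magnitude tell the story.

```python
# Back-of-the-envelope rebuild-time estimate for a single failed drive.
# The rebuild rates are assumed figures; swap in your own numbers.

def rebuild_hours(capacity_tb: float, rebuild_mb_per_sec: float) -> float:
    capacity_mb = capacity_tb * 1_000_000        # decimal TB, as drive vendors count
    return capacity_mb / rebuild_mb_per_sec / 3600

if __name__ == "__main__":
    for rate in (25, 50, 100):                   # MB/s, illustrative only
        print(f"4TB @ {rate:>3} MB/s rebuild rate: ~{rebuild_hours(4, rate):.1f} hours")
    # At 25 MB/s a single 4TB drive takes roughly 44 hours -- nearly two days of
    # degraded, vulnerable operation before a second failure is even considered.
```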

There is a lot of talk about declustered RAID, and I think it is about time we learned about this RAID technology. At the same time, we should demand this technology before we even consider buying storage arrays with 4TB hard disk drives!

I have said this before: I am still trying to wrap my head around declustered RAID. So I invite the gurus on this matter to comment on this concept, but here is my understanding of the subject of declustered RAID.

Panasas‘ founder, Dr. Garth Gibson is one of the people who proposed RAID declustering way back in 1999. He is a true visionary.

One of the issues with traditional RAID today is that we still treat the hard disk component in a RAID domain as a whole device. Traditional RAID is designed to protect whole disks with block-level redundancy. An array of disks is treated as a RAID group, or protection domain, that can tolerate one or more failures and still recover a failed disk from the redundancy encoded on the other drives. The RAID recovery requires reading all the surviving blocks on the other disks in the RAID group to recompute the blocks lost on the failed disk. In short, the recovery in the event of a disk failure operates on the whole device, and therefore an entire 4TB HDD has to be recovered. This is not good.

The concept of RAID declustering is to break away from the whole-device idea and apply RAID at a more granular scale. IBM GPFS works with logical tracks, and RAID is applied at the logical track level. Here’s an overview of how it compares to traditional RAID:

The logical tracks are spread out algorithmically across all physical HDDs, and the RAID protection layer is applied at the track level, not at the HDD device level. So, when a disk actually fails, the RAID rebuild is applied at the track level. This significantly improves the rebuild times of the failed device, and does not affect the performance of the entire RAID volume much. The diagram below shows the declustered RAID’s time and performance impact when compared to a traditional RAID:
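To make the difference tangible, here is a minimal simulation under a deliberately simplified model (pool size, group width and stripe counts are all made up): in a traditional layout only the failed disk's RAID-group peers do the rebuild work, while in a declustered layout the stripes that touched the failed disk are scattered across the whole pool, so every surviving disk contributes a small share.

```python
# Simplified model of rebuild load: traditional RAID group vs declustered placement.
# All sizes and counts below are assumptions for illustration.
import random
from collections import Counter

POOL = 60                 # disks in the pool
GROUP = 10                # stripe width (traditional RAID group size)
STRIPES_PER_DISK = 1000   # stripes that touch any one disk

def traditional_rebuild_load(failed=0):
    """Peers of the failed disk shoulder the entire rebuild."""
    peers = [d for d in range(GROUP) if d != failed]
    return Counter({d: STRIPES_PER_DISK for d in peers})

def declustered_rebuild_load(failed=0):
    """Each stripe lives on a random subset of the pool; rebuild reads are spread out."""
    load = Counter()
    for _ in range(STRIPES_PER_DISK):              # stripes that included the failed disk
        members = random.sample(range(POOL), GROUP)
        if failed not in members:
            members[0] = failed                    # force this stripe to involve the failed disk
        for d in members:
            if d != failed:
                load[d] += 1                       # each surviving member reads one fragment
    return load

if __name__ == "__main__":
    trad = traditional_rebuild_load()
    decl = declustered_rebuild_load()
    print(f"traditional: {len(trad)} disks busy, max {max(trad.values())} reads each")
    print(f"declustered: {len(decl)} disks busy, max {max(decl.values())} reads each")
```

With 60 disks in the pool, the per-disk rebuild load in the declustered case drops to a small fraction of the traditional case, which is where the shorter rebuild window comes from.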

While the IBM GPFS approach to declustered RAID is applied at a semi-device level, the future is leaning towards OSD. OSD, or object storage device, is the next generation of storage, and I blogged about it some time back. Panasas is the leader when it comes to OSD, and their radical approach is to apply RAID at the object level. They call this Object RAID.

With object RAID, data protection occurs at the file-level. The Panasas system integrates the file system and data protection to provide novel, robust data protection for the file system.  Each file is divided into chunks that are stored in different objects on different storage devices (OSD).  File data is written into those container objects using a RAID algorithm to produce redundant data specific to that file.  If any object is damaged for whatever reason, the system can recompute the lost object(s) using redundant information in other objects that store the rest of the file.

The above is a quote from the blog of Brent Welch, Panasas’ Director of Software Architecture. As mentioned, the RAID protection of the objects in the Panasas OSD architecture occurs at the file level, and the file or files constitute the object. Therefore, the recovery domain in Object RAID is at the file level, confining the risk and damage of data loss to the file level and not the entire device. Consequently, the speed of recovery is much, much faster, even for 4TB HDDs.
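As a rough sketch of that per-file recovery domain, consider this toy model with simple single-XOR-parity striping per file (Panasas' actual Object RAID is far more sophisticated; the OSD names and chunk sizes are made up). The point is that an OSD failure only touches the files that had a chunk on it, and each repair reads only that file's own surviving chunks.

```python
# Toy per-file (object) RAID: stripe each file's chunks across OSDs with one XOR parity.
import random
from functools import reduce

OSDS = [f"osd{i}" for i in range(8)]

def xor(chunks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def store_file(name, chunks):
    """Stripe a file's chunks across distinct OSDs and add one XOR parity chunk."""
    placement = random.sample(OSDS, len(chunks) + 1)
    return {"name": name, "chunks": chunks + [xor(chunks)], "osds": placement}

def repair_file(f, failed_osd):
    """Only this file's surviving chunks are read to recompute the lost one."""
    survivors = [c for c, o in zip(f["chunks"], f["osds"]) if o != failed_osd]
    return xor(survivors)

if __name__ == "__main__":
    files = [store_file(f"file{i}", [bytes([i]) * 16, bytes([i + 1]) * 16, bytes([i + 2]) * 16])
             for i in range(100)]
    failed = "osd3"
    touched = [f for f in files if failed in f["osds"]]
    for f in touched:
        repair_file(f, failed)
    print(f"OSD failure touched {len(touched)} of {len(files)} files; "
          f"each repair read only that file's own surviving chunks")
```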

Reliability is the key objective here. Without reliability, there is no availability. Without availability, there are no performance factors to consider. Therefore, the system’s reliability is paramount when it comes to keeping the data protected. RAID has been the guardian all these years. It’s time for a revolutionary approach to safeguard reliability and ensure data availability.

So, how many vendors can claim they have declustered RAID?

Panasas is a big YES, and they apply their intelligence in large HPC (high performance computing) environments. Their technology is tried and tested. IBM GPFS is another. But where are the rest?


Chink in NetApp MetroCluster?

Ok, let me clear the air about the word “Chink” (before I get into trouble), which is not meant to be racially offensive, unlike the news about ESPN having to fire 2 of their employees for using the word “Chink” in reference to Jeremy Lin. According to my dictionary (Collins COBUILD), a chink is a very narrow crack or opening on a surface, and I don’t really know any derogatory meaning of “chink” other than the one in my dictionary.

I have been doing a spot of work for a friend who has just recently proposed NetApp MetroCluster. When I was at NetApp many years ago, I did not have a chance to get to know the solution well, but I did know of its capability. After 6 years away, coming back to do a bit of NetApp was fun for me, because I was always very comfortable with NetApp technology. But NetApp MetroCluster, and in this case NetApp Fabric MetroCluster, presented me an opportunity to get closer to the technology.

I have no doubt in my mind that this is one of the most highly available storage solutions in the market, and NetApp is not modest about beating its own drum. It touts “No SPOF (Single Point of Failure)”, and rightly so, because it has put in all the right plugs for all the points that can fail.

NetApp Fabric MetroCluster is a continuous availability solution that stretches over 100km. It is basically a NetApp cluster with mirrored storage, but with the two halves of its mirrored infrastructure linked very far apart over Fibre Channel components and dark fiber. Here’s a diagram of how NetApp Fabric MetroCluster works in a VMware FT (Fault Tolerant) environment.

There’s a lot of simplicity in the design, because when I started explaining it to the prospect, I was amazed how easy it was to articulate, without all the fancy technical jargon or fuss. I just said … “imagine a typical cluster, with an interconnect heartbeat, and the storage mirrored. Then imagine the 2 halves being pulled very far apart … That’s NetApp Fabric MetroCluster”. It was simply blissful.

But then there were a lot of FUDs (fear, uncertainty, doubt) thrown in by the competitor, feeding the prospect with plenty of ammunition. Yes, I agree with some of the limitations, such as no SATA support for now. But then again, there is no perfect storage solution. In fact, Chris Mellor of The Register wrote about God’s box, the perfect storage, but to get to that level, be prepared to spend lots and lots of money! Furthermore, once you fix one limitation or bottleneck in one part of the storage, it introduces a few more challenges here and there. It’s never ending!

Side note: The conversation triggered the team to check with NetApp on SATA support in Fabric MetroCluster. It is already supported in ONTAP 8.1, and the present version is 8.1RC3. Yes, SATA support will be here soon.

More FUD came along, and when I was doing my research, I found some HP storage guys on the web hitting at NetApp MetroCluster. Poor HP! If you do a search for NetApp MetroCluster, I am sure you will come across these 2 HP blogs from 2010 deriding the MetroCluster solution. Check out this one and the follow-up to the first blog. What these guys chose to do was break the MetroCluster apart into 2 single controllers after a network failure, and attack it from that angle.

Yes, when you break up the halves, it is basically a NetApp system with several single points of failure (SPOF). But then again, who isn’t? Almost every vendor’s storage will have some SPOFs when you break the mirror.

What I can tell you is this: the weakness of NetApp MetroCluster is that it is not continuous data protection (CDP). Once your applications have written garbage on one volume, the garbage is reflected on the mirrored volume. You can’t roll back, and you live with the data corruption. That is why storage vendors, including NetApp, offer snapshots – point-in-time copies that let you roll back to the point before the data corruption occurred. CDP, by contrast, gives complete granularity of recovery for every write I/O, and that’s something NetApp does not have. That’s NetApp MetroCluster’s weakness.

But CDP is aimed at data recovery, NOT data availability. It is focused on customers whose requirement is the ability to get the data back to some usable state or form after a disaster (big or small), while the MetroCluster solution is focused on having the data available all the time. They are 2 different sets of requirements. So, it depends on what the customer’s requirement is. The sketch below illustrates the difference in recovery granularity.
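Here is a minimal sketch of a toy block device with a CDP-style write journal (the class and its methods are hypothetical, purely for illustration). A synchronous mirror would faithfully replicate the corrupting write; a snapshot could only restore the last scheduled point-in-time; a CDP journal can step back to the exact write before the corruption.

```python
# Toy illustration of CDP-style per-write recovery granularity.

class ToyCDPVolume:
    def __init__(self):
        self.blocks = {}
        self.journal = []                       # every single write is journalled

    def write(self, block, data):
        self.journal.append((block, data))
        self.blocks[block] = data

    def roll_back_to(self, write_number):
        """Replay the journal up to (not including) the offending write."""
        state = {}
        for block, data in self.journal[:write_number]:
            state[block] = data
        return state

if __name__ == "__main__":
    vol = ToyCDPVolume()
    vol.write(0, "good data")
    vol.write(1, "more good data")
    vol.write(0, "garbage")                     # the corrupting write
    # A mirrored volume now holds the garbage on both sides; a snapshot can only
    # restore the last scheduled point-in-time. CDP can step back to the exact
    # write before the corruption:
    print(vol.roll_back_to(2))                  # {0: 'good data', 1: 'more good data'}
```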

Then again, come to think of it, NetApp has no CDP technology of their own … do they?

Violin pulling the strings

Violin Memory is on our shores as we speak. There is already confirmed news that an EMC veteran in Singapore has joined them and will be surfacing soon in the South Asia region.

Of all the all-Flash storage systems I have on my platter, Violin Memory seems to be the only one ready for IPO this year, after having taken in USD$75 million worth of funding in 2011. That was an impressive number considering the economic climate last year was not so great. But what is so great about Violin Memory that is attracting the big money? Both Juniper Networks and Toshiba America are early investors.

I am continuing my quest to look at all-Flash storage systems, after my blogs on Pure Storage, Kaminario and SolidFire. (Actually, I wanted to write about another all-Flash player first because it keeps bugging me with its emails .. but I am feeling annoyed with that one right now.) Violin Memory is here and now.

From a technology standpoint, there are a few key technologies, notably their vRAID and their Violin Switched Memory architecture (vXM), both patent pending. Let’s explore these 2 technologies.

At the core of Violin Memory is the vXM, a proprietary, patent-pending memory switching fabric, which Violin claims to be the first in the industry. The architecture uses high speed, fault tolerant memory controllers and FPGA (field programmable gate arrays) to switch between corresponding, fully redundant elements of VIMMs (Violin Inline Memory Modules). The high level vXM architecture is shown below:


VIMMs are the building blocks, an aggregation of memory modules which can be of different memory types. The example below shows the aggregation of Toshiba MLC chips, which eventually form the VIMMs, and their further consolidation into the full-capacity Flash array.

The memory switching fabric of the vXM architecture enables very high speed in data switching and routing, and hence Violin can boast of having “spike-free latency“, something we in this industry desperately need.

Another cool technology that Violin has is their hardware-based vRAID. This is a RAID algorithm designed to work with Flash and other solid state storage devices. I am going through the Violin Memory white paper now, and the technology is some crazy, complicated sh*t. This is presented on their website about the low-latency vRAID:


I don’t want to sound stupid writing about vRAID now, and I probably need to digest the whitepaper several times in order to understand the technology better. I will let you know once I have a fair idea of how it works.

More about Violin Memory later. Meanwhile, a little snag came up when a small Texas company, Narada Systems, filed a patent infringement suit against Violin on January 5, 2012. The suit alleges that the vXM violates the technology and intellectual property of patents #6,504,786 and #7,236,488, and hence claims damages from Violin Memory. You can read about the legal suit here.

Whether this legal suit will affect Violin Memory is anybody’s guess, but the prospect of Violin Memory going for IPO in just a few short years validates how the industry is looking at the solid state storage solutions out there.

I have already mentioned a handful of solid state storage players whom I call “all-Flash”, and over at the Network Computing site, blogger Howard Marks revealed 2 more stealth-mode solid state start-ups, XtremIO and Proximal Data. This validates the industry’s confidence in solid state storage, and in 2012 we are going to see a goldrush in this technology.

The storage industry is dying for a revamp on the performance side, and living with the bane of poor spinning disk performance for years has made the market hungry for IOPS, low latency and throughput. Solid state storage is ripe, and I hope this will trigger newer architectures in storage, especially RAID. Well done, Violin Memory!