RedHat to acquire Gluster

This is breaking news. RedHat is to acquire Gluster!

What is Gluster? Gluster is a clustering Linux distribution started by Z Research under the direction of Anand Babu (who is currently Gluster’s CEO), aiming to commoditize supercomputing and clustered supercomputing storage. Gluster is open source but there is a commercial version as well. It runs on commodity 64-bit x86 hardware. The Gluster File System (GlusterFS) aggregates disks and memory resources into a pool of storage under a single global namespace, accessed through multiple file-level protocols. Its scale-out architecture means storage resources can be added as storage nodes in a building-block fashion to meet performance and capacity demands, rather like what the HP P4000 does for the block-level SAN environment.

Gluster can be integrated with most 64-bit Linux distros. This is done in Linux user space, but it can also be crafted at the Linux kernel level, where it becomes a software appliance that is easily integrated onto off-the-shelf 64-bit x86-64 platforms. This means that you can build a scale-out NAS pretty easily using your own hardware.

From an architecture standpoint, GlusterFS and its integration to a storage appliance looks like this:

 

Because it works in a modular, add-on fashion, this architecture is distributed and extended by replicating the same architecture across additional x86-64 platforms (each acting as a storage node), as shown below.

 

It’s really easy to install Gluster and build a scale-out NAS. I have saved a couple of videos about how Gluster is installed and I must say that it’s pretty easy. In less than 30 minutes, you can install your first Gluster storage node and then add additional nodes on the fly.

Enjoy the videos.

Video #1 (Gluster Installation)

(I have difficulty uploading the videos because WordPress requires me to purchase one of their solutions)

Video #2 (Creating and adding Storage Node in Gluster)

(I have difficulty uploading the videos because WordPress requires me to purchase one of their solutions)

Note: If you are interested in seeing the videos, please email me at chin-fah.heoh@storagenetworking-academy.com.

This news gets me very excited because it is the perfect endorsement of what I have been saying all along. Storage networking and data management are the foundations of CLOUD and VIRTUALIZATION. Without data being stored and managed well, everything falls apart. And as I have mentioned many times before, this is a fantastic time to become an extraordinary storage engineer/consultant/architect/sales (maybe not!)

 

HP P4000 – Pretty impressive

After being in the storage networking industry for so long, I have seen most of the new storage solutions out there. Most of them don’t really differ much from what’s already out there, and it gets a little boring. But once in a while, a little gem is unearthed and my excitement bubbles up again.

Today, I was at the HP P4000 G2 SAN workshop and the LeftHand Networks SAN/iQ storage solution which HP acquired in 2008 left me with 3 words – Interesting, Innovative and Impressive – from a technology standpoint.

I must admit that this is a little gem that got past my radar and now it’s HP’s gain. I have heard about LeftHand Networks in the past, and at the same time, I was also looking at another storage solution called Intransa. Unfortunately, Intransa went on to differentiate themselves and today, they are focused more as a storage solution for videos and CCTVs, seldom surfacing with innovative technology. LeftHand Networks was and is different and I can understand why HP bought them, because the technology that they bring with them to HP is really cool!

Now rebranded and renamed as the HP P4000 G2 SAN, the storage solution no longer sits on proprietary hardware. As part of HP’s Converged Infrastructure strategy, the SAN/iQ has been fully integrated into the HP ProLiant x86 platform (I heard there’s a blade version as well), making it simple to procure and probably simplifying operational resource planning and logistics as well. At the same time, there is also a P4000 VSA (Virtual Storage Appliance), which HP folks have been using for demos for several years now. There is a 60-day trial available at the HP P4000 VSA download site for organizations to try-and-buy, and if they do, they can turn some of their old x86 platforms into a storage appliance by just adding more hard disk drives. That saves money too!

So, what’s cool, you say?

2 key technologies stand out:

  • Storage Clustering
  • Network RAID

As I was well informed at the workshop today, the Storage Clustering technology is not exclusive to the P4000. In fact, Dell EqualLogic employs something similar as well. But it was something that impressed me and it is different from the traditional storage SANs that we usually see.

You see, in the traditional SAN setup, the LUNs or volumes are either loosely or tightly linked to 2 active/active storage processors/controllers. And the way most storage vendors do it, when a customer runs out of capacity or performance or both, they have to do a forklift upgrade of the controllers. This is disruptive and also does not allow CPU, memory or I/O channel upgrades to the existing controller. Today, most storage vendors do not allow you to break open the storage processor chassis and change the CPU, add more RAM or add more I/O paths to support more disk drives or increase throughput. Mind you, this is something that I have been questioning for a long time, but as the storage networking industry has it, you have to upgrade the entire storage processor or controller in order to get more power and capacity.

The P4000 (as well as the Dell EqualLogic) approaches this from another angle: instead of doing a forklift upgrade of the storage processor/controller, just add another node of the same CPU and RAM profile, and have the P4000 SAN/iQ software group the new node together with the existing node(s) to form a storage cluster group. As best practice, a Storage Cluster group should have 16 nodes or fewer, but in one of the war stories shared, one customer in the US actually had 32 nodes in a Storage Cluster group, for storage capacity reasons.

As more nodes are added to the Storage Cluster group, the LUNs/volumes can be extended or spanned to the other nodes as long as they are physically connected on a Gigabit network, and the entire LUN or volume is seen as ONE regardless of which physical nodes it sits on. Typically you see this sort of single "global namespace" concept at the file system level, but this is the first time I have seen it implemented at the SAN level. (Ok, I have to admit that I am a little behind the times with this technology)
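To make the scale-out idea concrete, here is a minimal Python sketch of a volume whose logical pages are striped across whatever nodes are in the cluster group. This is my own toy model, not SAN/iQ code; the class names and the round-robin layout are invented purely for illustration.

# Toy model only -- not HP SAN/iQ code. A volume's logical pages are spread
# across the nodes in the cluster group, and adding a node grows the pool.

class StorageNode:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb

class StorageCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)              # x86 nodes in the cluster group

    def add_node(self, node):
        self.nodes.append(node)               # scale out: no forklift upgrade
        # (a real system would also rebalance existing pages onto the new node)

    def total_capacity_gb(self):
        return sum(n.capacity_gb for n in self.nodes)

    def locate_page(self, page_no):
        # Naive layout: stripe logical pages round-robin across the nodes,
        # so the volume is seen as ONE even though it spans several boxes.
        return self.nodes[page_no % len(self.nodes)].name

cluster = StorageCluster([StorageNode("node1", 7200), StorageNode("node2", 7200)])
print(cluster.total_capacity_gb())            # 14400 GB across 2 nodes
print(cluster.locate_page(42))                # 'node1'

cluster.add_node(StorageNode("node3", 7200))  # more capacity and spindles
print(cluster.total_capacity_gb())            # 21600 GB, no controller swap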

Here’s a little diagram I dug up from LeftHand before it was acquired by HP, which I hope will enlighten the readers about this Storage Cluster feature.

 

But the best is yet to come, as the HP Solution Architect (Timothy Chua) mentioned that the Network RAID feature was uniquely LeftHand’s and way cooler. And I couldn’t agree more because this lit me up like a spark plug!

Since Storage Clustering can span LUNs/volumes across nodes, it was only natural that the RAID capability be extended across nodes as well. RAID-10, RAID-5 and RAID-6 can all be spanned across the nodes, spreading the data blocks and their mirrored/parity blocks across the nodes in the network. And the nodes do not have to be at a single site. With Gigabit networks, the nodes can be separated into multiple sites as well, giving the entire solution quite comprehensive campus-wide storage high availability. And since this is Network RAID, it gives an entirely new meaning to the words Disaster Recovery, because it can eliminate the need for data replication. Primary data in a Network RAID-10 on Node 1/Site 1 can be mirrored on Node 2/Site 2, which can be further mirrored to Node 3/Site 3 and Node 4/Site 4 for a 4-way mirror. This is the P4000 Multi-Site SAN solution.
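Here is a minimal sketch of the placement idea behind Network RAID-10 across sites. It is my own illustration, not the actual SAN/iQ algorithm; the node and site names are invented.

# Illustration only, not the SAN/iQ algorithm: place N synchronous mirror
# copies of each page on nodes in different sites, which is the gist of
# Network RAID-10 in a multi-site configuration.

nodes = [
    {"name": "node1", "site": "Site1"},
    {"name": "node2", "site": "Site2"},
    {"name": "node3", "site": "Site3"},
    {"name": "node4", "site": "Site4"},
]

def place_mirrors(page_no, copies=2):
    """Pick nodes for the mirror copies of a page, preferring distinct sites
    so that losing an entire site never loses every copy of the page."""
    start = page_no % len(nodes)
    ordered = nodes[start:] + nodes[:start]
    chosen, seen_sites = [], set()
    for n in ordered:
        if n["site"] not in seen_sites:
            chosen.append(n["name"])
            seen_sites.add(n["site"])
        if len(chosen) == copies:
            break
    return chosen

print(place_mirrors(0, copies=2))   # 2-way mirror, e.g. ['node1', 'node2']
print(place_mirrors(0, copies=4))   # 4-way mirror spread across all 4 sites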

The diagram below shows how Network RAID is implemented with VMware ESX.

 

And since replication is no longer a requirement, VMware’s SRM (Site Recovery Manager) is not required either.

It is no surprise that synchronous replication in the P4000 solution is equivalent to Network RAID. Though the concept of separating the storage controllers/nodes into multiple sites for true long-distance mirroring exists elsewhere, it usually doesn’t exist at this level. NetApp has its Fabric and Stretch MetroCluster and EMC has its VPLEX, but they are usually proposed at the higher end of the spectrum. It looks to me that the HP P4000 is the only one that brings this concept to the entry-level iSCSI SAN segment. Kudos!

There is asynchronous replication as well for longer-distance networks.

I did not stay for the demo today but I am already tickled pink about the HP P4000 technology. It made a good impression on me and I can’t wait to learn more about how it works internally. I am looking forward to a deeper dive into the P4000 and hope to stay for the demo next time.

More specialized appliances at Oracle OpenWorld

I was reading the news from Oracle OpenWorld and a slew of news about specialized appliances are on the menu.

Oracle added the Big Data Appliance and the Oracle Exalytics Business Intelligence Machine to its previous numero uno, the Exadata Database Machine. EMC also announced its Greenplum Data Computing Appliance and its VNX Unified Storage for Oracle.

As quoted

The EMC VNX Unified Storage for Oracle is a VNX system that has 
Oracle installed in a VMware vSphere virtual machine environment. 
The system is meant to unify all Oracle environments--database over 
Oracle Direct NFS, application servers over NFS, and testing and 
development over NFS--resulting in less disk space used and faster 
testing. EMC says this configuration was made because 50% of Oracle 
customers are virtualizing their systems today.

The VNX Unified Storage for Oracle includes EMC's Fully Automated 
Storage Tiering (FAST) technology, which migrates most frequently 
used data between a primary Fibre Channel drive and solid state drives 
and migrates less frequently used data to Serial ATA (SATA) drives and 
its FAST Cache. In an Oracle environment, FAST is well-suited to 
database applications that generate a large number of random 
inputs-outputs, that experience sudden bursts in user query activity, 
or a high number of user loads and where the entire working set can 
be contained in the solid state drive cache.

Based on testing carried out on an Oracle Real Application Clusters 
(RAC) 11g database that was configured to access the VNX7500 file 
storage over the Network File System (NFS), using the Oracle 
Direct NFS (dNFS) client, results showed an 100% improvement in 
transactions per minute (TPM), 170% improvement in IOPS, and 
a 79% decrease in response time, the company said.

As for Greenplum, EMC was quoted:

The company also is showing off the EMC Greenplum Data Computing 
Appliance(DCA) for Big Data Analytics configuration, which provides 
a new migration path to Greenplum for Oracle Data Warehouse. This 
system includes the Greenplum Data Computing Appliance, EMC's 
Global Data Warehouse, and EMC's IT Business Intelligence Grid 
infrastructure. The EMC Greenplum DCA consists of 8 to 16 segment 
servers running Red Hat Enterprise Linux. Each segment server 
contains 96 to 192 processor cores, with 384 GB to 768 GB of 
memory per segment server. The DCA includes 12 600-GB Serial 
Attached SCSI (SAS) 15K RPM drives for a total useable and 
compressed capacity of 73 TB to 144 TB. The DCA competes with 
Oracle's Exadata Database Machine.

In tests performed with this server/storage configuration and a 
15-TB Oracle Data Warehouse, the DCA processed a 99 million rows 
query in less than 28 seconds vs. seven minutes in a traditional 
Oracle environment and data loads decreased from six days to 29 
minutes

It is getting pretty obvious that specialized appliances are making waves at Oracle OpenWorld, but what’s more interesting is the return of a combined and integrated environment of compute and storage, as I have mentioned in my previous blog. And I foresee that these specialized appliances will be one of the building blocks of cloud computing, together with general-purpose platforms such as x86, JBODs and the glue to all these, virtualization, notably VMware.

Storage Tiering – Responsible and Prudent

Does your IT have a bottomless budget? If not, storage tiering is likely to be considered as one of IT’s weapons to combat the ever-growing need for storage capacity.

Storage tiering is not new. In the past, features such as HSM (Hierarchical Storage Management) and ILM (Information Lifecycle Management) addressed storage tiering in different capacities, ranging from simple movement and migration of aging files, to data objects being moved within the data infrastructure of an organization with some kind of workflow and search capabilities.

Lately, storage tiering, and especially automated storage tiering, has been gaining prominence, thanks to 2 high-profile acquisitions – HP 3PAR and Dell Compellent. According to Wikibon,

Tiered storage is a system of assigning applications to different
types of storage media based on application requirements. Factors
considered in the allocation of storage type include the level of
protection needed, performance requirements, speed of recovery,
and many other considerations. Since assigning application data to
specific media may be complex, some vendors provide software for
automatically managing the process.

For the sake of simplicity, this blog talks about automated storage tiering within the storage array itself, where different data blocks are moved within several tiers to achieve just-right storage provisioning. Why do we need to achieve this “just-right provisioning”? Rather than discussing this from an IT, technical angle, the just-right storage provisioning should be addressed from a business and operational angle, and more rightly so, costs and benefits.

Business and operations are about managing costs and increasing profits. In the past, many storage administrators employed a single-storage-tier architecture. Using the same type of disks, for example 146GB 10,000 RPM Fibre Channel disks, there were usually 1 or 2 RAID levels for the entire data storage requirement. Usually RAID 1+0 volumes/LUNs are for the applications that require the highest performance and availability, but they come at a big cost. So, the rest of the data is kept in RAID-5 volumes/LUNs. The introduction of enterprise SATA hard disk drives basically changed the rules of the ball game, giving storage administrators another option, a cheaper alternative to store their data. Obviously, storage vendors saw the great need to address this requirement, and hence created automated storage tiering as part of their offerings.
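To see why the single-tier model hurts, here is a back-of-envelope Python calculation of cost per usable GB. The drive prices below are made-up placeholders purely for illustration, not vendor quotes.

# Back-of-envelope cost-per-usable-GB comparison. All prices are invented
# placeholders; plug in your own quotes.

def usable_gb(n_disks, disk_gb, raid):
    if raid == "raid10":
        return n_disks // 2 * disk_gb          # half the spindles hold mirrors
    if raid == "raid5":
        return (n_disks - 1) * disk_gb         # one spindle's worth of parity
    raise ValueError(raid)

def cost_per_usable_gb(n_disks, disk_gb, disk_price, raid):
    return (n_disks * disk_price) / usable_gb(n_disks, disk_gb, raid)

# Hypothetical: 16 x 146GB 10K FC drives in RAID 1+0 vs 16 x 1TB SATA in RAID-5
print(round(cost_per_usable_gb(16, 146, 600, "raid10"), 2))  # ~8.22 $/GB, FC tier
print(round(cost_per_usable_gb(16, 1000, 250, "raid5"), 2))  # ~0.27 $/GB, SATA tier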

There are quite a few storage solutions that offer the storage tiering feature, and most of them are automated as well, meaning that the data blocks are moved between the different tiers of storage within the array itself automatically. 3PAR, long before they were acquired by HP, had their Dynamic Autonomic Tiering. Today, with HP, 3PAR offers 2 key strengths in their Autonomic Tiering offering.

  • Adaptive Optimization
  • Dynamic Optimization

As HP puts it,

 

Not to be outdone, Compellent (also long before its acquisition by Dell) had the Data Progression feature as part of its Automated Storage Tiering offering. In a nutshell, their solution (which, from a 10,000-foot view, is basically similar to most of the competitors’) is shown below.

 

The idea is to put the most frequently accessed data blocks on the most expensive, fastest storage tier and then dynamically move the less frequently accessed data blocks to the least expensive, most economical tier.

I had the privilege of learning more about Compellent (before Dell) technology about 2.5 years ago, thanks to my friends Chyr and Winston, the bosses at Impact Business Solutions. What Compellent had was pretty cool stuff, and I would like to share what I have picked up about the Dell Compellent storage solution. But some of the information could be a little out of date.

The foundation of the Dell Compellent automated storage tiering feature, called Data Progression, is their Dynamic Block Architecture (as shown below).

 

From a high level, all data blocks are bunched together into a logical data structure called a page. A page is 2MB by default but can be configured between 512KB and 4MB. The page is the granular unit required to initiate and implement the Data Progression feature in Compellent’s automated storage tiering solution. Every page comes with attached metadata such as:

  • When the page was created
  • When the page was last accessed
  • Which RAID level it currently sits in (RAID 1+0, RAID 5-5, RAID 5-9 and so on)
  • Which tier it currently resides in (Tier 1, 2 or 3)
  • Which kind of disk track it lives on (Fast or Standard)

Meanwhile, there are different storage tiers, notably Tier 1, 2 and 3, where different disk profiles reside. Typically, the SSDs or the 15K RPM disk drives will be in Tier 1, the 10K RPM disk drives will be in Tier 2 and the slowest 7200 RPM disks will be in Tier 3. Each of the 3 tiers is further divided into the outer Fast disk tracks (where the data transfer rate is highest) and the Standard disk tracks (the inner, slower tracks).

As data chunks or blocks are accessed, their frequency of access and their data movement statistics are gathered in real time, giving the Compellent solution fairly good intelligence about how the pages should be laid out on the most relevant tiers. As pages become more stale and less relevant, they are progressively relegated to the lower tiers, while the more active and more relevant pages, relative to importance of access, are progressively promoted to the higher tiers.
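As a thought experiment, the page metadata and the promote/relegate decision could be modeled roughly like this in Python. This is my own simplification (the field names, thresholds and one-tier-at-a-time movement are all invented), not Dell Compellent’s Data Progression code.

# Rough model of Data Progression-style page metadata and tier movement.
# My own simplification, not Dell Compellent's implementation.

import time
from dataclasses import dataclass

@dataclass
class Page:                      # a page is 2MB by default in Compellent terms
    created: float
    last_accessed: float
    raid_level: str              # e.g. "RAID 1+0", "RAID 5-9"
    tier: int                    # 1 (fastest) .. 3 (slowest)
    track: str                   # "Fast" (outer) or "Standard" (inner)
    access_count: int = 0

def progress(page, now=None, hot_secs=3600, cold_secs=7 * 86400):
    """Promote recently hot pages, relegate stale ones, one tier at a time."""
    now = now or time.time()
    idle = now - page.last_accessed
    if idle < hot_secs and page.tier > 1:
        page.tier -= 1           # promote toward the SSD / 15K RPM tier
    elif idle > cold_secs and page.tier < 3:
        page.tier += 1           # relegate toward the 7200 RPM tier
    return page.tier

p = Page(created=time.time() - 30 * 86400,
         last_accessed=time.time() - 14 * 86400,
         raid_level="RAID 5-9", tier=2, track="Standard")
print(progress(p))               # a stale page drops to Tier 3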

Different policies can also be configured to ensure that some important pages stay where they are regardless of their frequency of access or their relevance.

There is a very nice whitepaper from Dell detailing their Data Progression technology.

Another big automated storage tiering player is HP 3PAR. I admit that I don’t know the inner details of the HP 3PAR Dynamic Tiering solution, though I had some glossy lessons from a 3PAR Systems Engineer called Nathan Boeger (thanks to my friends at PTC Singapore, the 3PAR distributor back then) around the same time I learned about Dell Compellent. I hope HP can offer a more in-depth introduction to how the 3PAR technology works, now that I have gotten cosy with some of the HP Malaysia folks.

Similarly, the other big boys are offering automated storage tiering solutions as well. IBM has been offering Easy Tier for almost 18 months and EMC has had its FAST2 for about the same time.

Funnily enough, the odd one out in this automated storage tiering game is NetApp. I was on a partner conference call about a year ago and there were questions asking NetApp about their views on automated storage tiering. At the time of the concall, NetApp did not believe in automated storage tiering, preferring to market their Flash Cache PCIe (previously called the PAM card) solution. Take note that Flash Cache is a read-only "extension" to their NVRAM, used to accelerate read operations of WAFL. And also take note that NetApp, at the time of writing, does not have an "engine" that performs automated storage tiering, regardless of how they spin it.

There are also host-based file tiering solutions. Since I am familiar with the NetApp universe, Arkivio and Enigma Data Solutions are 2 of the main partners that NetApp works with. Recently NetApp also started reselling StorNext from Quantum. But note that these host-based solutions are file-based, making them less granular, less dynamic and less efficient. They are usually marketed as file archiving solutions, and the host-based licenses are usually charged per TB. In large enterprises, this might make sense, but for the everyday Joes (with tight IT budgets), host-based file archiving solutions are expensive. And they are nowhere close to the efficiencies of automated storage tiering.

Overall, automated storage tiering, when applied, should help IT operations and the organization’s business reduce costs. There is no longer a one-size-fits-all model, and associating the right storage tier with the relevance and importance of the data, at a very granular sub-LUN/sub-volume level, will help any organization define a more prudent approach to managing their data actively and, more importantly, their cost of operations.

This is called Responsible IT. 😀

ONTAP vs ZFS

I have to get this off my chest. Oracle’s Solaris ZFS is better than NetApp’s ONTAP WAFL! There! I said it!

I have been studying these two similar Copy-on-Write (COW) file systems at the data structure level for a while now, and I strongly believe ZFS is a better implementation of a COW file system (also known as a "shadow-paging" file system) than WAFL. How are the two similar and how are they different? The angle we are looking at is not performance but resiliency and reliability.

(Note: btrfs or “Butter File System” is another up-and-coming COW file system under GPL license and is likely to be the default file system for the coming Fedora 16)

In computer science, COW file systems are tree-like data structures, as shown below. They are different from the traditional Berkeley Fast File System data structure:

As some of you may know, the Berkeley Fast File System is the foundation of some modern-day file systems such as Windows NTFS, Linux ext2/3/4, and Veritas VxFS.

The COW file system is another school of thought; this type of file system is designed as a tree-like data structure.

In a COW file system, or more correctly a shadow-paging file system, the original node of the data block is never modified. Instead, a copy of the node is created and that copy is modified, i.e. a shadow of the original node is created and modified. Since the node is linked to a parent node, and that parent node is linked to a higher parent node and so on all the way to the top-most root node, each parent and higher-level parent node is copied and modified as the change traverses up the tree, ending at the root node.

The diagram below shows the shadow-paging process in action as modifications of the node copy and its respective parent node copies traverse to the top of the tree data structure. The diagram is from ZFS but the same process applies to WAFL as well.

 

As each data block, whether a leaf node (the last node in the tree) or a parent node, is modified, pointers to either the original data blocks or the copied data blocks are updated accordingly relative to the original tree structure, until the last root node at the top of the shadow tree is modified. Then, the COW file system commit is considered complete. Take note that the entire process of changing pointers and modifying copies of the nodes is done in a single I/O.

The root at the top is called the uberblock in ZFS and the fsinfo block in WAFL. Because an exact shadow of the tree-like file system is created when the data blocks are modified, this also gives birth to how snapshots are created in a COW file system. It’s all about pointers, baby!
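To illustrate the shadow-paging mechanics, here is a toy Python sketch of a COW tree: writing a leaf copies the leaf and every ancestor up to a brand-new root, while the old root still describes the untouched tree, which is exactly how a snapshot falls out of the design. This is a conceptual toy of my own, not ZFS or WAFL code.

# Toy copy-on-write tree: nothing is modified in place. Writing a leaf
# produces new copies of the leaf and of every ancestor up to a new root;
# the old root still describes the old tree, i.e. a free snapshot.

class Node:
    def __init__(self, name, data=None, children=None):
        self.name = name
        self.data = data
        self.children = dict(children or {})   # child name -> Node

def cow_write(root, path, new_data):
    """Return a NEW root reflecting the write; 'root' is left untouched."""
    if not path:                               # reached the leaf: shadow it
        return Node(root.name, data=new_data, children=root.children)
    head, rest = path[0], path[1:]
    new_child = cow_write(root.children[head], rest, new_data)
    new_children = dict(root.children)         # copy the parent too
    new_children[head] = new_child             # repoint only the changed child
    return Node(root.name, data=root.data, children=new_children)

# Build a tiny tree: root -> dir -> file
old_root = Node("uberblock", children={
    "dir": Node("dir", children={"file": Node("file", data="v1")})})

new_root = cow_write(old_root, ["dir", "file"], "v2")   # the "commit"

print(old_root.children["dir"].children["file"].data)   # v1 (snapshot view)
print(new_root.children["dir"].children["file"].data)   # v2 (live view)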

Here’s how it looks with the original data tree and the snapshot data tree once the shadow-paging modifications are complete.

 

However, there are a few key features from the data integrity and reliability point of view where ZFS is better than WAFL. Let me share that with you.

In a nutshell, ZFS is a layered architecture that looks like this

The Data Management Unit (DMU) layer is one implementation that ensures stronger data integrity. The DMU maintains a checksum on the data in each data block by storing the checksum in the parent block. Thus, if something is messed up in the data block (possibly by silent data corruption), the checksum in the parent block will be able to detect it and also repair the data corruption if there is sufficient data redundancy information in the data tree.
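Here is a tiny sketch of the idea, using SHA-256 purely for illustration rather than ZFS’s actual checksum pipeline: the parent holds the child block’s expected checksum, so corruption in the child is caught on read rather than silently returned.

# Conceptual sketch of parent-stored checksums, in the spirit of the ZFS DMU.
# Not ZFS code; it just shows why a checksum kept in the PARENT catches silent
# corruption that a checksum stored alongside the block itself could miss.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class BlockPtr:
    """A parent's pointer to a child block: address plus expected checksum."""
    def __init__(self, data: bytes):
        self.data = bytearray(data)          # stands in for the on-disk block
        self.expected = checksum(data)       # stored in the parent, not the child

    def read(self) -> bytes:
        if checksum(bytes(self.data)) != self.expected:
            raise IOError("silent data corruption detected by parent checksum")
        return bytes(self.data)

ptr = BlockPtr(b"important records")
print(ptr.read())                 # ok

ptr.data[0] ^= 0xFF               # simulate a bit flip on disk
try:
    ptr.read()
except IOError as e:
    print(e)                      # corruption detected; a redundant copy
                                  # (a mirror or ditto block) could repair it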

WAFL will not be able to detect such data corruption because its checksum is applied at the disk block level, and the parity derived during the RAID-DP write does not flag such a discrepancy. An old set of slides I found portrayed this comparison, as shown below.

 

Another cool feature that addresses data resiliency is the implementation of ditto blocks. Ditto blocks store up to 3 copies of the metadata, which allows the recovery of metadata even if 2 copies are lost.

Therefore, the ability of ZFS to survive data corruption and metadata loss is stronger when compared to WAFL. This is not to discredit NetApp’s WAFL. It is just that ZFS was built with stronger features to address the issues we have with storing data in modern-day file systems.

There are many other features within ZFS that improve upon NetApp’s WAFL. One such feature is the implementation of RAID-Z/Z2/Z3. RAID-Z is a superset implementation of the traditional RAID-5 but with a different twist. Instead of using a fixed stripe width like RAID-4 or RAID-DP, RAID-Z/Z2 uses a dynamic, variable stripe width. This addresses the parity RAID-4/5 "write hole" flaw, where an interrupted partial-stripe write can leave data and parity inconsistent. RAID-Z/Z2 avoids this by writing every logical block as its own full, variable-width stripe; a parity can be calculated and assigned for any stripe width, as shown below.
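As a highly simplified illustration of variable stripe width (a single-parity toy of my own, not the real RAID-Z on-disk layout), each logical write below becomes its own self-contained stripe, data plus parity, so there is never a read-modify-write against an existing partial stripe.

# Toy single-parity illustration of variable stripe width. Each logical write,
# whatever its size, becomes its own stripe: data sectors plus one parity
# sector computed over exactly those sectors. Not the real RAID-Z format.

from functools import reduce

SECTOR = 512

def xor_parity(sectors):
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), sectors))

def raidz_style_write(payload: bytes):
    """Split a write into sectors and return a self-contained variable-width
    stripe: [data..., parity]. No read-modify-write of neighbouring stripes."""
    padded = payload.ljust(-(-len(payload) // SECTOR) * SECTOR, b"\0")
    data = [padded[i:i + SECTOR] for i in range(0, len(padded), SECTOR)]
    return data + [xor_parity(data)]

stripe = raidz_style_write(b"a few sectors worth of logical block " * 50)
print(len(stripe) - 1, "data sectors + 1 parity sector in this stripe")

# Recover a lost data sector from the rest of the stripe:
lost = stripe[1]
rebuilt = xor_parity(stripe[:1] + stripe[2:])
print(rebuilt == lost)            # True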

 

Other really cool features are the Hybrid Storage Pool and the ability to create software-based caching using fast drives such as SSDs. This approach of creating Readzilla (read caching) and Logzilla (write caching) devices eliminates the need for proprietary NVRAM as implemented in NetApp’s WAFL.

The only problem is, despite the super cool features of ZFS, most Oracle (not Sun) salespeople do not have much of a clue how to sell ZFS storage. NetApp, with its well-trained and well-tuned sales force, is beating Oracle to a pulp.

Don’t just look at disk reliability!

I am sure that many of you in the storage networking industry can relate to this very well.

When 1 or 2 disk drives fail, the customer will usually press you for an answer, and usually this question will pop up: "How come the MTBF is 1.5 million hours but the drive(s) failed after a few months?" We also get asked, "How reliable are the disks?" and "How sure are you that the storage disks I buy will last?"

And for us in this line, we cannot deny the fact that the customer should be better informed (or at least we get cheesed off by these questions). A few blogs ago, I took the easy way out and educated the customer about MTBF (Mean Time Between Failures). That is only a quarter of the story, because MTBF alone does not determine the reliability of the storage ecosystem. The reliability of the storage ecosystem (which translates to data availability) is what the customer should really ask about, rather than spending their time pressing their annoyance onto you about 1 or 2 disk failures.

I also want to say a little about another disk reliability statistic called AFR. More about that later.

Let’s get a little deeper into disk MTBF. Disk MTBF is a statistically calculated, pre-production measurement. The key word here is "PRE", meaning that THIS IS NOT A FIELD-TESTED statistic! It is a statistical likelihood of how long a disk device will last.

One thing to note is how MTBF is derived. In fact, MTBF is established before the entire disk drive line goes into volume production. Typically, there is a process called a Reliability Demonstration Test (RDT). RDT involves putting about 1,000 or more drives into a testing chamber and running them very hard, at elevated temperatures, with 100% I/O for about 6-8 weeks. This is to simulate the harshest of operating environments and, inevitably, some disk drives will fail. From these failures, the MTBF is calculated.

An enterprise hard disk drive’s MTBF will usually be between 1.2 million and 2.0 million hours, while consumer-grade drives usually have an MTBF of about 300,000-600,000 hours. Therefore, it is important to educate customers, because customers like to compare some home office/SMB storage solutions with the enterprise storage solution you are about to propose to them.
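As a rough sanity check, standard reliability arithmetic (assuming a constant failure rate and 24x7 power-on hours, not any vendor’s published data) converts a quoted MTBF into an expected annualized failure rate like this:

# Rough conversion from quoted MTBF to an expected annualized failure rate,
# assuming 8,760 power-on hours per year and a constant failure rate.

import math

HOURS_PER_YEAR = 24 * 365

def afr_from_mtbf(mtbf_hours):
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

for mtbf in (1_200_000, 2_000_000, 600_000, 300_000):
    print(f"MTBF {mtbf:>9,} h  ->  AFR ~ {afr_from_mtbf(mtbf) * 100:.2f}%")

# MTBF 1,200,000 h -> ~0.73%   (enterprise class)
# MTBF   300,000 h -> ~2.88%   (consumer class)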

One of the war stories I heard was from a high-definition video production house. They got a contract worth hundreds of thousands of Malaysian Ringgit from a satellite TV content provider. But being less "educated" (which could also be translated as being cheap), they decided to store their valuable video content on Buffalo NAS storage. Video production environments can be harsh. The I/O stress on the disks is strenuous, and the Buffalo NAS disks crashed. They lost all their content (I don’t know what happened to their backup), were fined hundreds of thousands of Malaysian Ringgit, and had their contract terminated on the spot. This is not to say that the Buffalo NAS is a poor product, but they got the wrong product for the job. You can’t expect to race in Formula 1 with an old jalopy, can you? You have to get the right solution for the job, even if it costs more.

So the moral of the story is – "Educate yourself and be prepared to invest if the dollar value of the data is more important than what you think you might be saving."

Over the years, MTBF (even though it is still very much in use today) is getting less and less useful as a reliability measurement. So, what’s better? AFR!

AFR, or Annualized Failure Rate, has been in use for almost 10 years now, and Seagate, the hard disk manufacturer, uses the AFR value heavily. AFR is the percentage of the installed base of hard disk drives that fail and are returned to the factory in a given year. This is a more realistic figure and it is a statistic from the field. The typical value for enterprise disk drives is usually between 0.7-1.0%, although a few years ago, Google created a splash in the industry when they reported an AFR of 36%. For those who would like to read Google’s paper, click here.

Therefore AFR is a more reliable measurement of disk reliability than MTBF.

But disk reliability is just a quarter of the story. We need to be out there educating the customers about the reliability of the storage ecosystem rather than of a specific component. Data availability is paramount because components will fail throughout the lifecycle of the solution. That is why there are technologies like RAID, snapshots, backup, mirroring and so on to ensure that the data is made available for operations and businesses to continue.

Ultimately, if the customer wants to use disk MTBF against you, he’s basically shooting at you with the wrong bullet. It’s time you storage networking professionals out there educated the customers.

HP StoreOnce – Further Depth

I promised last week that I would look deeper into the HP StoreOnce technology, and I did. As I mentioned in my previous blog, HP StoreOnce technology is now embedded in its D2D series of secondary, target backup devices that do the job with no fuss and no fancy bells and whistles.

Here’s the lineup of the present HP D2D solutions.

 

HP Malaysia has constantly reminded me that their D2D deduplication solution is much more price-competitive than their competitors’, and this is something you, the readers, have to find out on your own. But I do believe that they are. Unfortunately, they did not have the first-mover’s advantage when Data Domain took the industry by storm in 2009, since HP StoreOnce was only launched with much fanfare in June 2010. Despite that, there is still plenty of room in the IT market to grow, especially within HP’s huge set of customers.

Without the first-mover’s advantage, HP StoreOnce has to differentiate itself from the existing competitors such as EMC Data Domain and Quantum. Labeling their deduplication technology as version 2.0 (whereas the competitors are still at "Version 1.0"?), HP StoreOnce banks on 3 key technologies. They are:

  • Sparse Indexing
  • Intelligent Block Size Management
  • Reduction in Disk Fragmentation

Out of these 3, Sparse Indexing is the most interesting, but I will save the best for last. Let’s start with Intelligent Block Size Management.

HP StoreOnce uses a variable chunking method with a smaller granularity of 4K, managed intelligently, thus achieving a higher deduplication ratio compared to its competitors, which either use a fixed chunking method or a variable chunking method with larger block sizes in the range of 8K to 32K. HP Labs’ testing reveals that the space savings were significant when compared with others.
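To make the variable-chunking idea concrete, here is a small Python sketch of generic content-defined chunking with a roughly 4K average chunk size and SHA-1 chunk identities. It illustrates the general technique only; the rolling-hash rule and size bounds are my own inventions, not HP StoreOnce internals.

# Generic content-defined (variable) chunking + SHA-1 dedup sketch.
# Not HP StoreOnce code -- just the general technique it builds on.

import hashlib

AVG, MIN, MAX = 4096, 1024, 16384      # ~4K average chunk, with size bounds

def chunks(data: bytes):
    """Cut chunk boundaries where a rolling byte hash hits a magic condition,
    so boundaries follow content rather than fixed offsets."""
    start, rolling = 0, 0
    for i, byte in enumerate(data):
        rolling = (rolling * 31 + byte) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= MIN and rolling % AVG == 0) or size >= MAX:
            yield data[start:i + 1]
            start, rolling = i + 1, 0
    if start < len(data):
        yield data[start:]

def dedup_store(stream: bytes, store: dict):
    """Store unique chunks keyed by SHA-1; return the recipe of hashes."""
    recipe = []
    for chunk in chunks(stream):
        h = hashlib.sha1(chunk).hexdigest()
        store.setdefault(h, chunk)       # only new chunks consume space
        recipe.append(h)
    return recipe

store = {}
backup1 = dedup_store(b"same data " * 50_000, store)
backup2 = dedup_store(b"same data " * 50_000 + b"a little new data", store)
print(len(store), "unique chunks stored for",
      len(backup1) + len(backup2), "referenced chunks")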

Below is a set of results from a PowerPoint presentation; you can see for yourself.

 

(NOTE: Please note that the savings/deduplication ratio can be very different and can range from good to bad for different types of data. Video and image files are highly encoded. Seismic and geo-mapping files are highly compressed. It is very likely that most deduplication solutions cannot achieve a high percentage with these types of files.)

Point #2 is about Reduction in Disk Fragmentation. The inherent benefits from Intelligent Block Size Management bring about the reduction in disk fragmentation. The smaller chunks mean less space wastage, especially when the block size is 4K or lower. HP StoreOnce also uses an intelligent algorithm to place blocks that are perceived to be related close to one another. This "locality" helps, and the retrieval and restore process becomes faster and more efficient.

Sparse Indexing is where HP StoreOnce touts itself as a game changer. Today’s data is already as massive as a mountain, and it’s going to get bigger and grow faster. With "Version 1.0" deduplication, the hashes created are stored either in memory or on disk. However, the massive data sets (especially unstructured data) are already producing massive amounts of hashes. Hashes are used to identify unique data blocks, but the avalanche of unstructured data means that most deduplication solutions are generating more and more hashes, making most "Version 1.0" hash lookups sluggish and difficult.

Sparse Indexing addresses this hash problem (by the way, HP StoreOnce uses the SHA-1 hash) by intelligently sampling a small subset of chunks and creating a very fast index lookup mechanism that stays in the system’s memory all the time. As the engineers at HP Labs put it:

Instead of holding every index item in RAM ready for comparison,
the HP team keeps just one in every hundred or so items in RAM
and puts the rest onto a hard drive. Duplicate data almost
always arrives in bursts. In other words, if one chunk of the
arriving stream is a duplicate, it is very likely that many
following chunks are duplicates. Sparse indexing takes advantage
of this phenomenon by storing the sequence of hashes of the
stored chunks next to each other on disk. As a result, a ‘hit’
in the sample RAM index can direct the system to an area of
the disk where many duplicates are likely to be found.

Sparse Indexing is not unique in the industry, but the engineers at HP Labs have put their thinking hats on and applied it to improve the search and looking up of the hashes in the StoreOnce deduplication technology.
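Here is a minimal sketch of the sampling idea, my own paraphrase of the HP Labs description rather than their code: keep only a sampled subset of chunk hashes in RAM, each pointing at the on-disk container it came from, and let a single hit pull in that container’s full manifest so the whole burst of neighbouring chunks dedupes against it.

# Sketch of the sparse indexing concept, not HP StoreOnce internals: only a
# sampled fraction of chunk hashes is kept in a RAM index, mapping each
# sampled hash to the on-disk container it came from. A single hit opens that
# container's full manifest, and the incoming burst dedupes against it.

import hashlib

SAMPLE_RATE = 100                  # keep roughly 1 in every 100 hashes in RAM

def is_sampled(chunk_hash):
    return int(chunk_hash, 16) % SAMPLE_RATE == 0

class SparseIndex:
    def __init__(self):
        self.ram_index = {}        # sampled hash -> container id (small, in RAM)
        self.containers = {}       # container id -> all hashes (stored on disk)

    def add_container(self, cid, hashes):
        self.containers[cid] = set(hashes)
        for h in hashes:
            if is_sampled(h):
                self.ram_index[h] = cid

    def dedupe_segment(self, incoming_hashes):
        """Identify duplicate chunks in an incoming segment via sampled hits."""
        incoming = set(incoming_hashes)
        duplicates = set()
        for h in incoming_hashes:
            cid = self.ram_index.get(h)
            if cid is not None:    # exploit locality: duplicates arrive in bursts
                duplicates |= self.containers[cid] & incoming
        return duplicates

# A previously stored backup of 1,000 chunks, then the same data sent again.
# With ~1-in-100 sampling, at least one sampled hit is all but certain here.
stored = [hashlib.sha1(str(i).encode()).hexdigest() for i in range(1000)]
idx = SparseIndex()
idx.add_container("container-1", stored)
print(len(idx.ram_index), "sampled hashes held in RAM for 1000 stored chunks")
print(len(idx.dedupe_segment(stored)), "of 1000 resent chunks found as duplicates")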

Further savings are also achieved when the deduped data is compressed with the LZ (Lempel-Ziv) compression method before it is stored on disk.

The HP StoreOnce technology was concocted entirely in the renowned HP Labs and, according to sources, this technology will indeed permeate across the whole HP StorageWorks (HP has since renamed it HP Storage) line. With this strategy, HP hopes to address the "fragmented and complicated" (as quoted by HP) deduplication and data protection strategy across the enterprise. By "fragmented and complicated", they mean that deduplicated data constantly has to be rehydrated and deduped again as it moves across different IT devices and functions.

In a perfect world, HP wants their StoreOnce technology to be like the diagram below.

 

However, one very interesting fact that I found was HP does not believe that primary storage deduplication is a good idea. They claim that it complicates the whole thing. Whether HP likes it or not, NetApp has been dishing out primary storage deduplication for several years now and you don’t see their customers unhappy with NetApp about this feature.

In one of the HP Business whitepapers I read, one of the takeaways was

 

I was like, “Whoa! What’s this?” I felt bemused about what was mentioned in the whitepaper. After all the best claims of the HP StoreOnce technology, I can’t help but think that this could be a banana skin on the pavement for HP.


Joe Tucci to quit as EMC’s CEO

News of Joe Tucci quitting EMC at the end of 2012 is abuzz tonight. Here’s one from The Register.

He is one of the longest-serving CEOs in the storage industry, and since he took over the helm in 2001, he has brought EMC to where it is today. Like him or loathe him, you cannot deny that he is one of the best out there. Having gone through at least 3 economic downturns, he has turned EMC into an industry giant, a juggernaut.

The next question is who will succeed him? There are many candidates, from long-serving senior staff to the new ones that EMC has recruited in recent years. It will only be at the end of 2012 that Joe finally leaves EMC, but the search for his successor will be an interesting one. We shall soon know.

The rise of the specialized appliance

Compute and storage are 2 components within the IT infrastructure which are surely converging. SAN and NAS are facing their greatest adversary yet, and could be made insignificant if the cloud and virtualization game has its way. This is giving rise to a new breed of solution, a specialized appliance where both compute and storage are ONE. Rising from the ashes of shared storage (SAN and NAS, take note), we are beginning to see things going back to the way of direct, internal storage.

There were some scuffles in the bushes about 5 years ago, when Sun (now Oracle) was ahead of its game. The Sun Fire X4500 (aka Thumper) was one of the strong candidates to challenge the SAN/NAS duopoly in this networked storage period. The X4500 integrated both the server and the storage components together, using ZFS as a file system and volume manager to deliver very high throughput on all the JBOD disks very efficiently. ZFS acted as the RAID, so there was no need for specialized RAID hardware. This proved that a very high-performance storage solution can be easily integrated using standard off-the-shelf infrastructure components and the x86 architecture. By combining both compute and storage together, there were hints that the industry was about to rise up to Direct-Attached Storage (DAS) again, despite its perceived weakness against SAN and NAS.

Unfortunately, the applications were not ready for DAS then. Besides ZFS, applications such as databases, email and file servers were not ready to jump on the DAS bandwagon and ride into the sunset. But the fairy tale seems to be retold again, and this time, the evidence that DAS could rise again is much stronger.

The catalyst to this disruptive force? Virtualization!

I mentioned that VMware is the silent storage killer a few blogs ago. Needless to say, that ruffled a few feathers among the readers. I have no doubt that virtualization is changing how we storage guys look at SAN and NAS. In a traditional setup, the SAN or NAS is set up to provision LUNs or mount points as the data storage for VMFS volumes in the VMware environment. It is then up to the storage array to provide snapshots, replication, thin provisioning and so on.

Perhaps VMware is nitpicking that managing storage arrays for VMFS volumes is difficult. From the VMware administrator’s view, they are right. They don’t want to know what’s going on below the VM level. All they want is storage, any kind of storage, and VMware will manage the volumes, snapshots, replication and thin provisioning. Indeed, they have been doing that since the vStorage APIs were introduced. In the new release of VMware version 5.0, the ante has been upped even higher, making networked storage less and less significant.

If you want to know about vStorage API and stuff, below is a diagram of the integration of the various components at the VMware API level.

 

VMware can now make direct, internal storage look like shared storage. The Virtual Storage Appliance (VSA) does just that. VMware already has a thriving market from the community and hobbyists for VMware appliances.

The appliance market has now evolved into new infrastructure too. Using the x86 architecture and off-the-shelf infrastructure components (sounds familiar?), companies such as Nutanix and Tintri are taking advantage of this booming trend to introduce specialized VMware appliances, as shown in their advertisements on their respective websites.

Here’s the Nutanix Ad:

 

Here’s the Tintri Ad:

 

Both Tintri and Nutanix are a new breed of appliances – specialized appliances for VMware.

At the same time, specialized appliances are being built for other applications as well. I have mentioned Oracle Exadata many times in the past, and Oracle Exadata is the perfect example of a fine-tuned, hardcore database engine built to make Oracle run at the best performance possible.

Likewise, HP has announced its E5000 Messaging System for Microsoft Exchange. The E5000 is a specialized appliance optimized and well tuned for Microsoft Exchange Server 2010. In the words of HP:

“HP E5000 Messaging System is the industry’s first fully self-contained platform built for the next-generation of Microsoft Exchange to deliver enterprise-class messaging to businesses of all sizes. Built as a turnkey solution that can be up and running in a few hours vs. days, the HP E5000 Messaging System gives business users the experience they want most: large mailboxes, centralized archiving of mailboxes files and 24×7 access from any device. IT staffs benefit the solutions simplicity to setup, scale and manage and to meet new demands affordably. Ideal for multi-site enterprises as well as branch office and remote office environments, each HP Messaging System delivers greater simplicity and accelerates deployment with preconfigured solutions starting at 500 mailboxes up to 3000 mailboxes, while delivering large, 1 to 2.5GB mailbox sizes. Clients can grow by adding storage capacity or more appliances within the environment up from hundreds to thousands of mailboxes.”

What are the specs of this E5000 box, you say? Here you go:

 

And look at Row#2 in the table above … Direct, Internal Disks! Look at Row #4, Xeon CPUs! Both Compute and Storage in the same appliance!

While the HP E5000 announcement was only recent, Hitachi Data Systems was already in the game early with its Unified Compute Platform and its Converged Platform for Microsoft Exchange, with relatively the same idea – specialized appliances.

Perhaps the HDS solutions aren’t exactly direct, internal storage but the concept is still the same – specialized appliance. HDS Unified Compute Platform (UCP) has these components.

 

HDS Converged Platform for MS Exchange provides their specialized “appliance” with Reference Architectures that can support up to 68,000 Microsoft Exchange mailboxes. Here’s an architecture diagram of their “appliance”

 

There’s no denying that the networked storage landscape is changing. So are the computing platforms. We are already seeing the compute and storage components being integrated together, tighter than ever. The wave is rising for specialized appliances and it can only get more intense from now on.

No wonder HP’s Converged Infrastructure vision is betting on x86 architecture, simple storage platforms with SAS/SATA disks and Virtualization. Other vendors are doing the same as well – Cisco, NetApp and VMware with their FlexPod solution and EMC with their VBlocks of VMware, Cisco and EMC Storage.

Hail to the Rise of the Specialized Appliance!

HP has a new CEO (again!)

It is past midnight and I can’t sleep. I haven’t been sleeping well lately, so I thought I’d catch up on some US news. And lo and behold, another big one showed up on Google News.

HP has fired Leo Apotheker and appointed Meg Whitman, the former boss of eBay, as the new CEO and President of HP. Leo Apotheker was on the job for about 10 months (Damn!). Such actions shake investor confidence and are not good for the image of the company. If Leo Apotheker wasn’t the right guy, why take him in the first place?

Leo was responsible for HP’s purchase of Autonomy just a month ago, and now the HP vision and direction have to be realigned again.

Here’s one of the news from Reuters.

Wait! There’s more confidence-shattering news. Excerpt from one of the online news sites:

 

HP has laid off hundreds of employees in its ill-fated foray 
into the mobile ecosystem. HP is trying to spin off its PC unit
and, at the same time, find a home for its fast deteriorating 
mobile assets, having spent billions of dollars trying to break 
into phones and tablets ($1 billion+ to buy Palm, investments 
in the business and write off of inventory) and then yanking 
the cord approximately 60 days into the adventure.

It is not hard to write not-so-good news about HP. They keep making such discouraging news all on their own.