Don’t get too drunk on Hyper Converged

I hate to be the one bursting the big bubble brewing around Hyper Convergence (HC). I urge all to look past the hot air and hype frenzy going on, because in the end, HC platforms have to be aligned and congruent with the organization's data architecture and business plans.

The announcement of Gartner's latest Magic Quadrant on Integrated Systems (read: hyper convergence) has put Nutanix at the head of the pack as of August 2015. Clearly, many of us get caught up in it, because being named a leader is the "greatest feeling in the world". However, that feeling is not the whole reality, because many factors go into making the leaders of the Magic Quadrant (MQ).

Gartner MQ Integrated Systems Aug 2015

First of all, the MQ is about market perception. There is no doubt that the companies in the Leaders Quadrant have earned their right to be there. Each company's revenue, market share, gross margin and profitability have helped put them at the front of the pack. However, the MQ also measures branding, marketing, market perception and acceptance, and other intangible factors.

Secondly, VMware EVO: Rail has split the market, with EMC alone carrying 3 HC solutions in VCE, ScaleIO and EVO: Rail. Cisco wanted to do its own HC piece with Whiptail (between the 2014 and 2015 MQ reports), then shut Whiptail down when its new CEO came on board. NetApp chose EVO: Rail and also has the ever-popular FlexPod. That is why, in this latest MQ report, NetApp and Cisco are evaluated independently, whereas in last year's report it was Cisco/NetApp. Market forces changed, and perception changed.

No Flash in the pan

The storage networking market is now teeming with flash solutions. Consumers are probably sick to their stomachs trying to work out which flash solution they should be considering. There is so much hype, fuzz and buzz, and yet, like a swarm of bees, in the chaos of the moment there is actually a calm and discernible pattern slowly, but surely, emerging. Storage networking guys would probably know this well, but for the benefit of other readers, how we should view flash (and other solid state storage) becomes clear with the picture below:

Flash performance gap

(picture courtesy of  http://electronicdesign.com/memory/evolution-solid-state-storage-enterprise-servers)

Right at the top, we have the CPU/Memory complex (labelled as Processor). Our applications, or at least bits and pieces of them, run in this CPU/Memory complex.

Therefore, we can see Pattern #1 showing up.

VMware in step 1 of breaking the Big 6 hegemony

Happy Lunar New Year! This is the Year of the Water Snake, which just commenced 3 days ago.

I have always maintained that VMware has the power to become a storage killer. I mentioned that it was a silent storage killer in my blog post many moons ago.

And this week, VMware is not so silent anymore. Earlier this week, VMware acquired Virsto, a storage hypervisor technology company. News of the acquisition is plentiful on the web and can be found here and here. VMware is seriously pursuing its "Software-Defined Data Center (SDDC)" agenda; having completed the software-defined networking component with the acquisition of Nicira back in July 2012, it now adds another bedrock component of SDDC with Virsto: software-defined storage.

Who is Virsto and what do they do? Well, in a nutshell, they abstract the underlying storage architecture and present a single, global namespace for storage, a big storage pool for VM datastores. I got to know about them last year, when I was researching the topic of storage virtualization.

I was looking at Datacore first, because I was familiar with Datacore. I got to know Roni Putra, Datacore's CTO, through a mutual friend, when he was back in Malaysia. There was a sense of pride knowing that Roni is a Malaysian. That was back in 2004. But Datacore isn't the only player in the game, because the market is teeming with folks like Tintri, Nutanix, IBM, HDS and many more. It just so happens that Virsto has caught the eye of VMware as it embarks on its first high-profile step into the storage game (one where VMware literally steps on the toes of the Storage Big 6). The Big 6 are EMC, NetApp, IBM, HP, HDS and Dell (maybe I should include Fujitsu as well, since it has been taking market share of late).

Virsto installs as a VSA (virtual storage appliance) into ESXi, and in version 2.0 it plugs right in as an almost-native feature of ESXi, not a vCenter tab like most other storage products. It looks and feels very much like native vSphere functionality, and this blurs the lines between storage and VM management. The only times a vSphere administrator needs to be involved in storage administration are when provisioning storage or expanding it. Those are the only 2 common "touch points" where a vSphere administrator has to deal with storage. This, therefore, simplifies the administration and management job.

Here’s a look at the Virsto Storage Hypervisor architecture (credits to Google Images):

What Virsto does, as I understand it at a high level, is take any commodity storage, provide a virtual storage layer over it and consolidate it into a very large storage pool. The storage pool is called vSpace (previously known as LiveSpace?) and "allocates" Virsto vDisks to each VM. Each Virsto vDisk looks like a native zeroed-thick VMDK, with the space efficiency of Linked Clones, but without the performance penalty of provisioning them. The Virsto vDisks are presented as NFS exports to each VM.

Another important component is the asynchronous write to the Virsto vLogs. Configured at the deployment stage, the vLog is basically a software-based write cache: it quickly acknowledges all writes for write optimization and, in the background, asynchronously de-stages them to the vSpace. Obviously it will have its own "secret sauce" to optimize the writes.
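
To make the write-log idea more concrete, here is a minimal Python sketch of the general pattern: acknowledge a write as soon as it lands in the log, and de-stage it to the backing pool in a background thread. This is purely my own illustration of a write-back log; the class and method names are hypothetical and have nothing to do with Virsto's actual implementation.

```python
import queue
import threading
import time

class WriteLog:
    """Illustrative write-back log: acknowledge writes quickly,
    de-stage them to the backing storage pool asynchronously."""

    def __init__(self, backing_pool):
        self.backing_pool = backing_pool        # a dict standing in for the big storage pool
        self.log = queue.Queue()                # the "vLog" stand-in
        self._destager = threading.Thread(target=self._destage, daemon=True)
        self._destager.start()

    def write(self, block_id, data):
        # The write is acknowledged once it is in the log,
        # not when it reaches the backing pool.
        self.log.put((block_id, data))
        return "ACK"

    def _destage(self):
        # Background thread drains the log into the pool.
        while True:
            block_id, data = self.log.get()
            time.sleep(0.01)                    # pretend the pool is slower than the log
            self.backing_pool[block_id] = data
            self.log.task_done()

pool = {}
vlog = WriteLog(pool)
print(vlog.write(42, b"hello"))   # "ACK" comes back immediately
vlog.log.join()                   # wait for de-staging to finish
print(pool[42])                   # b'hello' has now reached the pool
```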

Within the vSpace, organized as disk clone groups internal to Virsto, storage-related features such as tiering, thin provisioning, cloning and snapshots come as part and parcel of the platform. Other strong features of Virsto are its workflow wizard for storage provisioning and its intuitive built-in performance and management console.

As with most technology acquisitions, the company will eventually come to a fork where it has to decide which way to go. VMware has experienced this before with its Nicira acquisition. It had to decide between VxLAN (an IETF standard popularized by Cisco) and Nicira's own STT (Stateless Transport Tunneling). There is no clear winner, because choosing one over the other has its rewards and losses.

Likewise, the Virsto acquisition will have to be packaged in a friendly manner by VMware. It does not want to step on all the toes of its storage Big 6 partners (yet). It still has to abide by some industry "co-opetition" game rules, but it has started the ball rolling.

And I see 2 critical disruptive points in this acquisition:

  1. It has endorsed the software-defined storage/storage hypervisor/storage virtualization technology and started the commodity storage hardware wave. This could be the beginning of the end of proprietary storage hardware. This is also helped by other factors, such as the Open Compute Project by Facebook. Read my blog post here.
  2. It is pushing VMware towards a monopoly à la Microsoft of yesteryear. But this time around, Microsoft Hyper-V could be the beneficiary of the VMware agenda. No wonder VMware needs to restructure and streamline its business. News of VMware laying off about 900 staff can be read here, and the unfavourable news of its shares going down can be read here.

I am sure the Storage Big 6 are on the alert and are probably already building other technologies and partnerships beyond VMware. It is the natural thing to do, but there is no stopping VMware if it wants to step on the Big 6's toes now!

APIs that stick in Storage

Competition in storage networking and data management is only going to get fiercer. And there is always the question of open standards APIs versus proprietary APIs, because storage networking and data management vendors constantly have to balance gaining a competitive advantage with proprietary APIs against getting greater market acceptance with open standards APIs.

The flip side is that having proprietary APIs could limit and stunt the growth of the solution, albeit with much tighter integration and interoperability with complementary solutions. Open standards APIs could make the entire market a plain vanilla one where there is little difference between technology A or B or C or X, and in the long run could offer less incentive for technology innovation.

I am not an API guy. I do not code or do development work on APIs, but I do like APIs (Application Programming Interfaces). I have had my fair share of APIs, which can be considered open or proprietary depending on who you talk to. My understanding is that an API is more open when many ISVs, developers and industry supporters endorse it and have a valid (and usually profit-related) agenda to keep the API open.

I can share some work experience with APIs I have worked with in the past, and give my views on some cool present-day APIs related to storage networking and data management.

One of the API-related works I did was with the EMC Centera. I was working with Schlumberger to create a file-level archiving/lifecycle management solution for the GeoFrame seismic files with the EMC Centera. This was back in 2008.

EMC Centera does not present itself as a NAS box (even though, I believe, IDC lumps Centera sales numbers into worldwide NAS market figures, unless that is no longer the case) but rather works through ISVs and application-level integration with the EMC Centera API. Here's a high-level look at how applications talk to the EMC Centera through the API.

Note: EMC Centera can also present a NAS integration interface through the NFS, CIFS, HTTP and FTP protocols, but the customer must deploy (and may have to purchase) the EMC Centera Universal Access software appliance. This is for applications that do not have the level of development and integration to interface with the EMC Centera API.
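
For readers who have never worked with a content-addressed store, the general application pattern looks something like the Python sketch below: the application hands the object to the store, keeps the content address it gets back, and later retrieves the object by that address. This is a toy illustration of the CAS concept only; the class and method names are made up and are not the actual EMC Centera API calls.

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressed store: objects are written once and
    retrieved by an address derived from their content."""

    def __init__(self):
        self._objects = {}

    def store(self, data: bytes) -> str:
        # The content address is a digest of the data itself,
        # so identical objects are stored only once.
        address = hashlib.sha256(data).hexdigest()
        self._objects.setdefault(address, data)
        return address

    def retrieve(self, address: str) -> bytes:
        return self._objects[address]

cas = ContentAddressedStore()
clip = cas.store(b"GeoFrame seismic file, archived 2008")
print(clip)                    # the content address the application keeps
print(cas.retrieve(clip))      # the original object, fetched by address
```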


The beginning of the end of FCoE

Never bet against Ethernet!

I am sure many IT experts and practitioners would agree. In the past 30 years or so, Ethernet has fought and won against many so-called would be “Ethernet killers”. The one that stood out for me was ATM (Asynchronous Transfer Mode) because in my past job, I implemented NFS over ATM, running in LANE (LAN Emulation) mode in a NetApp filer setup in Sarawak Shell.

That was more than 10 years ago, and back then ATM was hot technology. It was touted as the next-generation network technology, supposed to unify voice and data networks. ATM also had better framing and QoS (Quality of Service) control, and offered several modes of traffic shaping and policies. Yet today, ATM is reduced to a niche telecommunications protocol and does not participate much in the LAN technology space.

That was the networking space. The storage networking space has been dominated by Fibre Channel for almost 15 years. Fibre Channel is a serial technology that replaced the parallel, channel-based technology of SCSI in the enterprise. And Fibre Channel has grown by leaps and bounds, dominating the SAN (Storage Area Network) landscape with speeds up to 16Gbit/sec today.

When the networking world and the storage networking world collided (I mean combined) with Fibre Channel over Ethernet (FCoE) technology some years back, something had to give sometime soon. Yup, FCoE was really hot 2 years ago, but where is it today? Is Cisco still singing about FCoE like it used to? What about the other storage vendors that used to have at least 1 FCoE slide in their product presentations?

Welcome to the world of IT hype! FCoE's benefit? The ability to carry LAN and SAN traffic on one piece of wire. 10 Gigabit-style, baby!


NFS-phobic in Malaysia

I taught the EMC Cloud Infrastructure and Services (CIS) class last week and naturally, a few students came from the VMware space. I asked how they were implementing their storage and everyone said Fibre Channel.

I have spoken to a lot of people about this as well in the past, whether they are using SAN or NAS storage for VMware environments. And almost 99% would say SAN, either FC-SAN or iSCSI-based SAN. Why???

When I ask these people about deploying NFS, the usual reply would be related to performance.

NFS version 3 won the file sharing protocol race in its early days, when Unix variants were prevalent, though it was not helped by the Balkanization of the Unices in the 90s. Furthermore, NFS lost quite a bit of ground between NFSv3 in 1995 and the coming-out party of NFSv4.1 just 2 years ago. The in-between years were barren, and NFS became a bit of a joke, nicknamed "Need For Speed" or "No F*king Security". That could also be a contributing factor to the NFS-phobia we see here in Malaysia.

I have experience with both SAN and NAS and understand the respective protocols of Fibre Channel, iSCSI, NFS and CIFS, and I feel that NFS has been given unfair treatment by people in this country. For the uninformed, NFS is the only NAS protocol supported by VMware. CIFS, the Windows file sharing protocol, is not supported, probably for performance and latency reasons. However, if you read up on high performance computing (HPC), clustering, or MPP (Massively Parallel Processing) resources, you will almost always find NFS involved in delivering very high performance I/O. So, why isn't NFS proposed with confidence in VMware environments?

I have blogged about this before. And I want to use my blog today to reassert what I believe in and hope that more consideration can be given to NFS when it comes to performance, even for virtualized environments.

NFS performance is competitive when compared to Fibre Channel, and in a lot of cases better than iSCSI. It is just that the perception of poor NFS performance is stuck in people's minds, and it is hard to change that. However, there are multiple credible sources stating that NFS is comparable to Fibre Channel. Let me share with you one of the sources that compared NFS with other transport protocols:

From the 2 graphs of IOPS and latency, NFS fares well against the other, more popular transport protocols in VMware environments. Those NFS performance numbers are probably not RDMA-driven either; otherwise RDMA could very well boost the NFS numbers to even higher ground.

What is this RDMA (Remote Direct Memory Access)? RDMA is already quietly making its presence felt, being used with transports like InfiniBand and 10 Gigabit Ethernet. In fact, Oracle Solaris version 11 will use RDMA as the default transmission protocol whenever RDMA-enabled NICs are present in the system. The diagram below shows where RDMA fits in the network stack.

RDMA eliminates the need for the OS to participate in the delivery of data, depositing the data directly from the initiator's memory into the target's memory. This removes traditional networking overheads such as buffer copying and setting up network data structures for the delivery. A little comparison of RDMA with traditional networking is shown below:
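
To make that difference concrete, here is a toy Python model. It has nothing to do with the real RDMA verbs API; it only illustrates the idea that the traditional path stages the payload through intermediate buffers, while the RDMA-style path places it straight into memory the target has registered in advance.

```python
def traditional_send(payload: bytes, target_buffer: bytearray) -> None:
    # Each hop below stands in for a buffer copy the OS would normally make.
    socket_buffer = bytes(payload)                         # copy 1: app -> socket buffer
    kernel_buffer = bytes(socket_buffer)                   # copy 2: socket -> kernel/NIC buffer
    target_buffer[:len(kernel_buffer)] = kernel_buffer     # copy 3: into target memory

def rdma_like_send(payload: bytes, registered_region: memoryview) -> None:
    # One placement, no intermediate staging: the initiator writes directly
    # into the memory region the target registered beforehand.
    registered_region[:len(payload)] = payload

target_a = bytearray(64)
target_b = bytearray(64)
traditional_send(b"hello via sockets", target_a)
rdma_like_send(b"hello via RDMA", memoryview(target_b))
print(bytes(target_a[:17]), bytes(target_b[:14]))
```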

I was trying to find out how prevalent NFS is in supporting the fastest supercomputers in the world on the Top500 Supercomputing list. I did not find details of NFS being used, but what I did find was that the Top500 supercomputers do not employ Fibre Channel SAN at all! Most have proprietary interconnects, with some on InfiniBand and 10 Gigabit Ethernet. I would presume that NFS figures in most of them, and I am confident that NFS can be a protocol of choice for high performance environments, and even VMware environments.

The future looks bright for NFSv4. We are beginning to see the words "parallel NFS (pNFS)" being thrown into conversations around here, and the awareness is there. NFS version 4.2 is just around the corner as well, promising greater enhancements to the protocol.

 

Is Dell Fluid Enough?

Dell made a huge splash 2 weeks ago in London at their inaugural Dell Storage Forum. They dubbed their storage and management lineup the "Fluid Data Architecture", offering customers the ability to quickly adapt and automate their business when it comes to storage networking and, more importantly, data management.

At the London show, they showcased several key innovations and product developments. Here's a list of their jewels:

  • DR4000 – an inline, content optimized backup deduplication appliance (based on the acquired technology of Ocarina Networks)
  • Compellent Storage Center 6.0 – a major software release
  • Compellent key technology integration with VMware
  • Optimized object storage for Microsoft SharePoint with the DX6000 Object Storage Platform – DX6000 is an OEM from Caringo
  • Broader support for Dell Force10, PowerConnect and their partner Brocade

The technology from Ocarina Networks is fantastic, and I have always admired Ocarina. I have written about Ocarina in my previous blog. But I was a bit perplexed as to why Dell chose to enter the secondary dedupe market with a backup dedupe appliance in the DR4000. They are a latecomer to the secondary deduplication game, and I thought HP was already late with their StoreOnce.

They could have used Ocarina's technology to trailblaze the primary deduplication market. In my previous blog, I mentioned that primary deduplication hasn't really taken off in a big way, and Dell, with the technology from Ocarina, could set the standard and establish itself as the leader of the primary deduplication market space. I was disappointed that they didn't, at least not just yet.

The Compellent Storage Center 6.0 release was a major one and, for better or for worse, it coincided with the departure of Phil Soran, the founder and CEO of Compellent. Phil felt that he could let his baby go, and Dell is certainly making the most of Compellent as their flagship data storage product.

The major release included 64-bit support for greater performance and scalability, and also added several key VMware integrations that other vendors already have. The technologies included:

  • VMware vStorage API for Array Integration (VAAI)
  • Storage Replication Adapter plug-in for VMware Site Recovery Manager (SRM)
  • vSphere 5 client plug-in
  • Integration of Enterprise Manager and vSphere

Other storage-related releases (I am not going to talk about Force10 or their PowerConnect solutions here) included Dell offering 16Gbps Fibre Channel switches from Brocade, and their DX6000 Object Storage Platform optimized for Microsoft SharePoint.

I think it is fantastic that Dell is adapting and evolving into a business-oriented, enterprise solution provider, and their acquisitions in the past 3 years – EqualLogic, Exanet, Ocarina Networks, Force10 and Compellent – prove that Dell aims to take market share in the storage networking and data management market. They have key initiatives with CommVault, Symantec, VMware and Microsoft as well. And Michael Dell is becoming quite a celebrity lately, giving Dell the boost it needs to battle in this market.

But the question is, "Is their Fluid Data Architecture fluid enough?" If I were a customer, would I bite?

As a customer, I look for completeness in the total solution, and I cannot fault Dell for having most of the pieces in the solution stack. They have networking in PowerConnect, Force10 and Brocade. They have SAN in both Compellent and EqualLogic, but their unified storage story is still a bit lacking. That's because we have not seen Dell's NAS storage yet. Exanet was a scale-out NAS, and we have seen little rah-rah about that product.

From a data management perspective, their data protection story gels well with the Commvault and Symantec partnerships, but I feel that Dell sales and SEs (at least in Malaysia) spend too much time touting Compellent Automated Storage Tiering. I have spoken to folks who have listened to the Dell guys' pitches, and they are too one-dimensional. It's always about storage tiering and little else about the rest of the Compellent technology.

At this point in time, the story that Dell tells here in Malaysia is still disjointed, but they are getting better. Eventually, the fluidity (pun intended ;-)) of their Fluid Data Architecture will improve.

How will Dell fare in 2012? They have taken a beating in the past 2 quarters of IDC's storage market tracker, losing some percentage points in market share, but I think Dell will continue to tinker to get it right.

2012 will be their watershed year.

Why VAAI?

This is Part 2 of my previous blog entry about VAAI (vStorage API for Array Integration), with more details on VAAI. VAAI offloads some of the I/O-related functions to a VAAI-enabled storage array, giving the hypervisor more compute and memory resources for its other functions. The storage array, upon receiving a VAAI command, executes whatever is required of it.

Why is VAAI important? What does it do that makes it so useful and important to the hypervisor?

VAAI is about a set of new SCSI commands. And there are 3 important ones:

  • WRITE-SAME
  • XSET
  • ATS

What exactly do these SCSI commands do?

WRITE-SAME is a SCSI command that instructs the storage array to zero the virtual VMDK disks or VMFS LUNs. This usually happens when a guest OS requires a brand new set of virtual disks and the virtual disks need to be initialized. In the past (before VAAI), the hypervisor had to repetitively send 0s to the storage to perform the disk zeroing. As shown in the diagram below, each zero operation is sent from the hypervisor to the storage.

This back-and-forth of sending 0s and acknowledgments between the hypervisor and the storage is not efficient. With VAAI, a single WRITE-SAME command is sent from the hypervisor to the storage array, and the storage array does the zeroing on the disks and LUNs. The hypervisor does not intervene in the process until it gets an acknowledgment of completion. See the diagram below of how VAAI helps in bulk-zeroing of disks and LUNs in the storage array.

The animated GIFs are taken from Luke Reed's blog, a fantastic read.
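
To picture the difference in plain terms, here is a toy Python model (my own illustration, not real SCSI code): without the offload, the host issues one zeroing write per block, each a round trip to the array, while with WRITE-SAME it issues a single command and the array repeats the zero pattern internally.

```python
class ToyArray:
    """Stand-in for a VAAI-capable array: stores blocks and counts commands."""
    def __init__(self):
        self.blocks = {}
        self.commands_received = 0

    def write(self, lba, data):
        self.commands_received += 1            # one round trip per block
        self.blocks[lba] = data

    def write_same(self, start_lba, num_blocks, pattern):
        self.commands_received += 1            # a single command for the whole range
        for lba in range(start_lba, start_lba + num_blocks):
            self.blocks[lba] = pattern

def zero_without_vaai(array, start_lba, num_blocks, block_size=512):
    # Host-driven zeroing: the hypervisor sends every zero-filled block itself.
    zeros = b"\x00" * block_size
    for lba in range(start_lba, start_lba + num_blocks):
        array.write(lba, zeros)

def zero_with_write_same(array, start_lba, num_blocks, block_size=512):
    # Offloaded zeroing: one WRITE-SAME describes the whole range.
    array.write_same(start_lba, num_blocks, b"\x00" * block_size)

a, b = ToyArray(), ToyArray()
zero_without_vaai(a, 0, 1000)
zero_with_write_same(b, 0, 1000)
print(a.commands_received, b.commands_received)   # 1000 commands vs 1
```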

The second VAAI SCSI command is XSET, which performs hardware-accelerated full copy. This command is also known as XCOPY, and it offloads the process of copying the blocks of data that make up a VMDK file. Such copying operations occur when the hypervisor is doing things like VM cloning, Storage vMotion or VM creation from templates (bulk copying to create many similar VMs in one go).

Again, courtesy of Luke Reed's animated GIFs, the diagram below shows a full copy without VAAI

and after implementing VAAI, where the full, bulk copy operation is offloaded to the storage array to execute.
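
The same offload idea can be modelled for the full copy. In the toy Python sketch below (again my own illustration, not real SCSI), the non-VAAI clone reads every block up to the hypervisor and writes it back down, while the XCOPY-style clone is a single command and the data never leaves the array.

```python
class ToyArray:
    """Stand-in array, as in the previous sketch, with read and extent-copy added."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.commands_received = 0

    def read(self, lba):
        self.commands_received += 1
        return self.blocks[lba]

    def write(self, lba, data):
        self.commands_received += 1
        self.blocks[lba] = data

    def copy_extent(self, src_lba, dst_lba, num_blocks):
        self.commands_received += 1            # one command; the copy happens internally
        for i in range(num_blocks):
            self.blocks[dst_lba + i] = self.blocks[src_lba + i]

def clone_without_vaai(array, src_lba, dst_lba, num_blocks):
    # Host-driven clone: each block travels array -> hypervisor -> array.
    for i in range(num_blocks):
        array.write(dst_lba + i, array.read(src_lba + i))

def clone_with_xcopy(array, src_lba, dst_lba, num_blocks):
    # Offloaded clone: one XCOPY-style command, the data stays inside the array.
    array.copy_extent(src_lba, dst_lba, num_blocks)

vmdk = {lba: bytes([lba % 256]) * 512 for lba in range(100)}
a, b = ToyArray(vmdk), ToyArray(vmdk)
clone_without_vaai(a, 0, 1000, 100)
clone_with_xcopy(b, 0, 1000, 100)
print(a.commands_received, b.commands_received)   # 200 commands vs 1
```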

The third and last SCSI command of VAAI is ATS, or hardware-assisted locking. ATS stands for Atomic Test and Set, and the command allows the hypervisor to lock only the required blocks rather than the entire LUN.

Without VAAI, the entire LUN could be temporarily locked by the numerous VMFS operations of one single hypervisor, preventing other hypervisors from accessing the shared LUN. The ATS primitive offloads lock management from the host to the storage array and keeps the LUN available by locking only the required blocks, not the entire VMFS file system. Please see the pleasing diagrams below:

(without VAAI ATS)

(with VAAI ATS)
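
ATS is essentially a compare-and-write: replace this on-disk lock record only if it still holds the value I expect. The toy Python sketch below models that atomic test-and-set on a single metadata block, in contrast to a SCSI reservation that fences off the whole LUN. It is a conceptual illustration only, not the actual VMFS on-disk locking format.

```python
import threading

class ToyLun:
    """Stand-in LUN exposing an ATS-style compare-and-write on single blocks."""
    def __init__(self):
        self.blocks = {}
        self._internal = threading.Lock()   # models the array's own atomicity

    def compare_and_write(self, lba, expected, new):
        # Atomically: write `new` only if the block still holds `expected`.
        with self._internal:
            if self.blocks.get(lba) == expected:
                self.blocks[lba] = new
                return True
        return False

def acquire_file_lock(lun, lock_lba, host_id):
    # Each host claims just the lock record for the file it needs,
    # leaving every other block of the shared volume untouched.
    return lun.compare_and_write(lock_lba, None, host_id)

lun = ToyLun()
print(acquire_file_lock(lun, lock_lba=7, host_id="esx-host-1"))  # True: lock won
print(acquire_file_lock(lun, lock_lba=7, host_id="esx-host-2"))  # False: must retry
print(acquire_file_lock(lun, lock_lba=9, host_id="esx-host-2"))  # True: different file, no contention
```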

And if you want to see the VAAI Hardware Accelerated Full Copy (aka XSET) in action, here’s a little video showing how it is done in an EMC environment.

The primary and most noticeable benefit is definitely performance. The secondary benefit, though not so obvious, is that it allows VMware and its hypervisor to scale, because the hypervisor does not get bogged down by I/O functions it is not meant to do.

There were some new additions to VAAI in vSphere 5.0. Its FAQ lists that in ESXi 5.0, support for NAS hardware acceleration is included, with the following primitives:

  • Full File Clone – Like the Full Copy VAAI primitive provided for block arrays, this Full File Clone primitive enables virtual disks to be cloned by the NAS device.
  • Native Snapshot Support – Allows the creation of virtual machine snapshots to be offloaded to the array.
  • Extended Statistics – Enables visibility to space usage on NAS datastores and is useful for Thin Provisioning.
  • Reserve Space – Enables creation of thick virtual disk files on NAS.

So, there you have it folks. Why VAAI? Here’s why.

VAAI to go!

First of all, let me apologize. I am guilty of not updating my blog as regularly as I did in the past. Things got a bit crazy after Christmas and I had to juggle several things that demanded more of my attention, but I am confident things will sort themselves out soon enough.

Today's topic is VMware's VAAI (vSphere vStorage API for Array Integration). This feature was announced more than 3 years ago but was only introduced in vSphere 4.1 in July 2010, and it now comes with newer enhancements in the latest release, vSphere 5.0.

What is this VAAI and what does this mean from a storage perspective?

When VMware came into prominence around the version 3.0/3.5 era, the whole world revolved around the ESX hypervisor. It tried to do everything on its own, in its own proprietary way. Given its nascent existence then, ESX had to do what it had to do and control everything within its hypervisor universe. Yes, it was a good move then, and it did what it was supposed to do. This was back when server virtualization was in its infancy and resource requirements were less demanding.

Hence when VMware wanted to initialize VMs, create VMDK files on a datastore, create clones or snapshots, or even execute vMotion and Storage vMotion, it tended to execute them at the hypervisor level. For example, when creating virtual disks with VMFS, most of the commands to initialize the disks were done at the VMFS level. Zeroing the virtual disks meant sending zeroing commands to the actual physical disks on the shared storage. This went on back and forth, taxing CPU cycles and memory at the hypervisor layer, and sending wasteful and unnecessary zeroes over the network to the storage array. This was very inefficient and wasteful, and it degraded performance tremendously, especially at the hypervisor layer (compute and memory).

There were also other operations, such as virtual disk locking, that locked up an entire LUN housing several datastores. Again, not good.

But VMware took off like a rocket and quickly established itself as a Tier 1, enterprise server virtualization solution addressing the highest demands of the enterprise. It is also defining the future of cloud computing, building exorbitant requirements as it pushes forward. And VMware began to realize that if the hypervisor is to scale, it needs to leave the I/O operations to the "experts", the "experts" here being the storage arrays themselves.

So, in version 4.1, VAAI (vStorage API for Array Integration) was introduced as an API suite, following 3 other earlier APIs – vStorage API for Site Recovery Manager (SRM), vStorage API for Data Protection and vStorage API for Multipathing.

In a nutshell, as I have mentioned before, VAAI offloads I/O and storage related operations to the VAAI-capable storage array (leave it to the experts) as shown in the diagram below:

 

Of course, the storage vendors themselves have to rework their array OS layers to integrate with the VAAI API. You could say that the VAAI primitives are "hooks" that enhance storage connectivity and communication with vSphere's hypervisor. But then again, if you look at it from the other angle, vSphere needs the storage vendors even more in order for its universe to scale. Good thing VMware has a big, big market share. Imagine if there were no takers for the VAAI APIs; that would be a strange predicament indeed.

What is the big deal we get from VAAI? The significant and noticeable benefit is increased performance. By offloading I/O functionality and operations to the storage array itself, the hypervisor's compute and memory resources are not bogged down, resulting in higher performance and better response times to serve its VMs and other VM operations.

I am going off to another meeting and shall write about VAAI in more detail later. Until the next entry, adios and have a great year ahead.

NetApp to buy Commvault?

The rumour mill is churning again with talk that Commvault is an acquisition target, and this time the suitor is NetApp. The rumour is not new, but somehow Commvault has gotten too big in the past couple of years to be swallowed up easily. Still, this time it could happen, as NetApp is hungry …. very hungry.

NetApp took a big hit a couple of weeks back when it announced its Q3 numbers. Revenues fell short of analysts' expectations and the share price took a big hit. While its big rival, EMC, has been gaining momentum on all fronts, it appears that NetApp is getting overwhelmed by EMC's one-stop shop. EMC is everything to everyone who wants storage, data protection software, services, data management, scale-out, data security, big data, cloud storage, virtualization and much more. NetApp has been very focused on what it does best, and that is storage. Everything revolves around its crown jewel, Data ONTAP, with Engenio recently added to its stable of storage solutions.

NetApp does not mix the FAS storage with Engenio, making sure that its story-telling gels, but in the past few years many other vendors have been taking the "one-stack-fits-all" approach. Oracle has Exadata, where servers, storage, database and networking come all-in-one. Many others are doing the same, while NetApp prefers more "loosely-coupled" partnerships, such as its "Imagine Virtually Anything" concept partnership with VMware and Cisco, in the shape of FlexPod. FlexPod is a flexible infrastructure package comprising presized storage, networking and server components designed to ease the IT transformation journey, from virtualization all the way to cloud computing.

Commvault would be a great buy (though a very expensive one) for NetApp. Things fit perfectly if NetApp decides to abandon its overly protective shield and start becoming a "one-stop shop" for its customers, starting with data protection. Commvault is already the market leader in the enterprise disk-based backup and recovery market, as reflected in Gartner's Magic Quadrant report of January 2011.

It's amazing to see how Commvault became the leader in this space in just a few short years, and part of its unique approach is providing a common core engine called the Common Technology Engine (CTE). The singular core architecture allows the different data management components – Backup, Replication, Archiving, Resource Management and Classification & Search – to share resources and, more importantly, detailed knowledge of the data under management.

In the middle of this year, NetApp signed an OEM deal with Commvault to resell their SnapProtect solution, which integrates with NetApp's SnapMirror. SnapProtect manages NetApp snapshots and SnapMirror replications, and also enhances the solution by providing tape-out for SnapMirror. The diagram below shows how Commvault SnapProtect fits into NetApp's snapshot and SnapMirror data protection architecture.

 

Sources close to NetApp's C-level say that NetApp is still very much focused on its ONTAP strategy and on its "loosely-coupled" partnerships with key partners like VMware, Cisco, F5 and Quantum. But at the back of NetApp's mind, I believe, it knows it is time to do something about it. This "focused" (which could also be interpreted as overly cautious) approach is probably on its last leg, as cloud computing is changing all that. The cost of integrating different, yet flexible, storage, data protection and data management components is prohibitive for cloud service providers, and NetApp must take a bolder approach to win the hearts of these providers. Having a one-stop shop isn't so bad anymore; it is beginning to make sense, and NetApp had better do something quick. Commvault is one of the best out there and NetApp shouldn't lose this chance.

Note: While the rumours of NetApp and Commvault are swirling, there have also been rumours that Quantum could be another NetApp target.