Run free … Symantec FileStore

It has been a rough and tough three weeks, and I have missed writing my blog. Last week, the toughest of the three, was my CompTIA Storage+ training for Symantec SEs in Malaysia. They were a great crowd and I loved it, but I was really tired after that.

One piece of exciting news during that week was the ouster of long-time Symantec employee and CEO Enrique Salem, who was replaced by Steve Bennett, their Chairman. The news of that unfortunate event can be read here and here. And within hours of that, calls came to break up the Veritas portion of Symantec, putting pressure on Symantec’s board of directors to either spin off the entity or sell it off.

To be fair, many observers, including me, believed that the marriage between Symantec and Veritas in 2005 wasn’t really what you would call a “match made in heaven”. It was more like strange bedfellows to me. And there was an internal joke (one that I could not verify) about Veritas CEO Gary Bloom’s promise to the Veritas board when he joined them from Oracle in 2000.

It went like this:

“Gary Bloom promised the Veritas board of directors in 2000 that he would be able to grow Veritas into a USD$5 billion company in 5 years’ time. Nearing the end of those 5 years, in 2005, Gary fulfilled his promise by merging with Symantec, instantly making Veritas a USD$5 billion company.”

Note: This is just an inside joke which I heard from a Veritas friend back in 2005, and it is by no means meant to put Gary Bloom in a bad light. If it does, I apologize.

But back to the present. Our class last week brought up the subject of Symantec FileStore. When it first came out in October 2009, I thought it was an interesting solution. For once, I thought there was something that could “out-filesystem” NetApp’s ONTAP and WAFL, because Veritas had one of the best scale-out, clustered file systems. They just hadn’t figured out the front-end protocols yet, where NAS and iSCSI reigned. Veritas File System (VxFS) and the Veritas Cluster File System, part of Veritas Cluster Server (VCS), were mature and proven in the enterprise. Along with Veritas Volume Manager (VxVM), this was perhaps THE best file system/volume management suite around. Mind you, ZFS had not reached that level of prominence at the time.


Primary Dedupe where are you?

I am a bit surprised that primary storage deduplication has not taken off in a big way, unlike the buzz when deduplication first came into being about 4 years ago.

When the first deduplication solutions came out, they were aimed particularly at the backup data space. Now more popularly known as secondary data deduplication, the technology reduced the inefficiencies of backup and helped spark the frenzy of adulation around companies like Data Domain, Exagrid, Sepaton and Quantum a few years ago. The software vendors were not left out either; Symantec, Commvault, and everyone else in town had data deduplication for backup and archiving.

It was no surprise that EMC battled NetApp and finally won the rights to acquire Data Domain for USD$2.4 billion in 2009. Today, in my opinion, the landscape of secondary data deduplication has pretty much settled and matured. Practically everyone has some sort of secondary data deduplication technology or solution in place.

But talk of primary data deduplication hardly causes a ripple now compared with a few years ago, especially here in Malaysia. Yeah, the IT crowd is pretty fickle that way, because most tend to follow the trend of the moment. Last year it was Cloud Computing, and now the big buzzword is Big Data.

We are here to look at technologies to solve problems, folks, and primary data deduplication solutions should be considered in any IT planning. And it is our job as storage networking professionals to continue to advise customers on what is relevant to their business and to address their pain points.

I get a bit cheesed off that companies like EMC or HDS continue to spend their marketing dollars hyping the trends of the moment rather than using some of those funds to promote good technologies, such as primary data deduplication, that solve real-life problems. The same goes for most IT magazines, publications and other communication media, which rarely give space to technologies that solve problems on the ground and instead keep harping on hype, fuzz and buzz. It gets a bit too ordinary (and mundane) when they are trying too hard to be extraordinary, because everyone is basically talking about the same freaking thing at the same time, over and over again. (Hmmm … I think I am speaking off topic now … I better shut up!)

We are facing an avalanche of data. The other day, the CEO of Nexenta used the term “data tsunami”, but whatever term is used does not matter. There is too much data. Secondary data deduplication solved one part of the problem, and now it’s time to talk about the other part: data in primary storage, hence primary data deduplication.

What is out there? Who’s doing what in terms of primary data deduplication?

NetApp has had A-SIS (now NetApp Dedupe) for years, and it is good in my books. They talk to customers about the benefits of deduplication on their FAS filers. (Side note: I am seeing more benefits from using data compression in primary storage, but I am not going to go there in this entry.) EMC had primary data deduplication in their Celerra years ago, but they hardly talked much about it. It’s on their VNX as well, but again, nobody in EMC ever speaks about their primary deduplication feature.

I have always loved Ocarina Networks’ ECO technology, but Dell hasn’t given much of a hoot about Ocarina since the acquisition in 2010. The technology surfaced a few months ago in the Dell DX6000G Storage Compression Node for its Object Storage Platform, but then again, all Dell talks about is their Fluid Data Architecture from the Compellent division. Hey Dell, you guys are so one-dimensional! Ocarina is a wonderful gem in your jewel case, and yet all your storage guys talk about is Compellent and EqualLogic.

Moving on … I ought to knock Oracle on the head too. ZFS has great data deduplication technology that is meant for primary data and a couple of years back, Greenbytes took that and made a solution out of it. I don’t follow what Greenbytes is doing nowadays but I do hope that the big wave of primary data deduplication will rise for companies such as Greenbytes to take off in a big way. No thanks to Oracle for ignoring another gem in ZFS and wasting their resources on pre-sales (in Malaysia) and partners (in Malaysia) that hardly know much about the immense power of ZFS.

But an unexpected source, Microsoft, could help trigger greater interest in primary data deduplication. I have just read that the next version of the Windows Server OS will have primary data deduplication integrated into NTFS. The feature will be available in Windows 8, and the architectural view is shown below:

The primary data deduplication in NTFS will be a feature add-on for Windows Server users. It is implemented as a filter driver on a per-volume basis, with each volume a complete, self-describing unit. It is cluster-aware and fully crash-consistent on all operations.

The technology is Microsoft’s own, built from scratch, and it will work to position Hyper-V as a strong enterprise choice in its battle with VMware for the server virtualization space. Mind you, VMware already has a big, big lead, and this is a do-or-die move that Microsoft must make just to keep Hyper-V in the catch-up game. Otherwise, the gap between Microsoft and VMware in the server virtualization space will grow even greater.

I don’t have the full details of this, but I read that the NTFS primary deduplication chunk sizes will be between 32KB and 128KB, and that it will be post-processing, that is, deduplication will run on data after it has been written rather than in the write path.
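To give a feel for what chunk-based, post-processing deduplication means in practice, here is a minimal Python sketch of the general idea only. It is not Microsoft’s implementation; the crude rolling-hash chunker, the ChunkStore class and the post_process_dedupe function are all hypothetical stand-ins. The sketch cuts already-written data into chunks within the reported 32KB–128KB window, hashes each chunk, keeps only one copy of each unique chunk per volume (mirroring the per-volume, self-describing design mentioned above), and turns each file into a recipe of chunk references.

```python
import hashlib

CHUNK_MIN = 32 * 1024     # 32KB, lower end of the reported chunk size range
CHUNK_MAX = 128 * 1024    # 128KB, upper end of the reported chunk size range

def chunk_boundaries(data: bytes, mask_bits: int = 16):
    """Very simplified content-defined chunking: cut wherever a crude
    rolling value hits a pattern, clamped to the 32KB-128KB window."""
    boundaries, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) ^ byte) & 0xFFFFFFFF
        size = i - start + 1
        at_cut = (rolling & ((1 << mask_bits) - 1)) == 0
        if size >= CHUNK_MAX or (size >= CHUNK_MIN and at_cut):
            boundaries.append((start, i + 1))
            start = i + 1
    if start < len(data):
        boundaries.append((start, len(data)))
    return boundaries

class ChunkStore:
    """One store per volume, so the volume stays a self-contained unit.
    Each unique chunk is kept once, keyed by its SHA-256 digest."""
    def __init__(self):
        self.chunks = {}  # digest -> chunk bytes

    def add(self, chunk: bytes) -> str:
        digest = hashlib.sha256(chunk).hexdigest()
        self.chunks.setdefault(digest, chunk)
        return digest

def post_process_dedupe(files: dict, store: ChunkStore) -> dict:
    """Post-processing pass over data that was already written: each file's
    content becomes a 'recipe' of chunk references (a stand-in for the real
    on-disk metadata, which this sketch ignores)."""
    recipes = {}
    for name, data in files.items():
        recipes[name] = [store.add(data[a:b]) for a, b in chunk_boundaries(data)]
    return recipes

if __name__ == "__main__":
    # Two files that share most of their content deduplicate well.
    base = bytes(range(256)) * 1024                      # ~256KB of repeating data
    files = {"a.vhd": base + b"tail-1", "b.vhd": base + b"tail-2"}
    store = ChunkStore()
    recipes = post_process_dedupe(files, store)
    logical = sum(len(d) for d in files.values())
    physical = sum(len(c) for c in store.chunks.values())
    print(f"logical {logical} bytes, stored {physical} bytes after dedupe")
```

Running the toy example, the two near-identical files collapse to roughly half the physical space, which is the kind of win primary deduplication promises on homogeneous data sets such as VM images.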

With Microsoft introducing this technology soon, I hope primary data deduplication will get some deserved accolades, because I think most companies are really not doing justice to the great technologies they have in their jewel cases. And I hope Microsoft, with all its marketing savviness and adeptness, will do some justice to a technology that solves real-life data problems.

I bid you good luck – Primary Data Deduplication! You deserve better.

IDC Worldwide Storage Software QView 3Q11

I did not miss the IDC worldwide storage software report for 3Q2011 when it was released a couple of weeks ago; I was just too busy to work on it until now.

The IDC QView report covers 7 functional areas of storage software:

  • Data protection and recovery software
  • Storage replication software
  • Storage infrastructure software
  • Storage management software
  • Device management software
  • Data archiving software
  • File system software

All areas are growing, and 3Q2011 grew 9.7% compared with 3Q2010. In the overall storage software market, EMC holds the top position at 24.5%, followed by Symantec (15.3%) and IBM (14.0%). Here’s a table showing the overall standings of the storage software vendors.

[Table: IDC Worldwide Storage Software QView 3Q11, overall storage software vendor standings]

In fact, EMC leads in 3 of the areas: storage infrastructure, storage management and device management software. The fastest growing area, however, is data archiving software at 12.2%, followed by storage and device management software at 11.3%.

HP is not in the table, but IDC reported that the biggest growth came from HP, at 38.2%, boosted by its acquisition of 3PAR. Watch out for HP in the coming quarters. Also worthy of note is Symantec’s growth rate, which was only 2.2%, while IBM, at #3, is catching up fast. I wonder what is happening at Symantec, having seen them lose their lofty heights in recent years.

The storage software market is a USD$3.5 billion market, and it is one on which storage vendors are placing more and more importance. This market will grow.

No more Huawei-Symantec

Huawei-Symantec is no more. Last night, Huawei, the Chinese telecom giant, bought out Symantec’s remaining 49% stake in the joint venture for USD$530 million.

The joint venture was initially set up in 2007 to focus on 3S – Server, Storage and Security. With the consolidation under one owner and one name, Huawei will continue to focus on the 3S. And since Huawei has a telco business as well, a lot of the 3S solutions will come into play, and it is likely that Huawei will become a cloud player too.