Time for Fujitsu Malaysia to twist and shout and yet …

The worldwide storage market is going through unprecedented change as it takes baby steps out of one of the longest recessions in history. We are not exactly out of the woods yet, given the Eurozone crisis, slowing growth in China and the sputters in the US economy.

Back in early 2012, Fujitsu showed good signs of taking market share in enterprise storage, but what happened to that? In the last 2 quarters, the storage market shares of the server boys, the likes of HP, IBM and Dell, have either shrunk (in the case of HP and Dell) or tanked (as in IBM). I would have expected Fujitsu to continue its impressive run and capture more of the enterprise market, and yet it didn’t. Why?

I was given an Eternus storage technology update by the Fujitsu Malaysia pre-sales team more than a year ago. Eternus has made some significant gains in technology, such as Advanced Copy, Remote Copy, Thin Provisioning, and Eco-Mode, but I was unimpressed. The feature set was that of a follower, since every other storage vendor in town already had those features.

Continue reading

It’s all about executing the story

I have been in hibernation mode, with a bit of “writer’s block”.

I woke up in Bangalore, India at 3am, not having adjusted myself to the local timezone. Plenty of things were on my mind, but I couldn’t help thinking about what’s happening in the enterprise storage market after the Gartner Worldwide External Controller-Based report for 4Q12 came out last night. Below is the consolidated table from Gartner:

Just a few weeks ago, it was IDC with its Worldwide Disk Storage Tracker and below is their table as well:

Continue reading

And Cloud Storage will make us even stranger

It was a dark and stormy night ….

I was in a car with my host in the stifling traffic jams on the streets of Jakarta. We had just finished dinner, and his driver was taking me back to the hotel. It was about 9pm, and we were making conversation, trying to figure out how we could work together. My host, a wonderful Singaporean who has been residing in Jakarta for more than a decade and a half, owns a distributorship focusing mainly on IT security solutions. He had invited me to Jakarta to give a talk on Cloud Storage at the Indonesia CIO Network event on January 9th 2013.

I was there to represent SNIA South Asia and give a talk about CDMI (Cloud Data Management Interface), and my host also took the opportunity to introduce Nutanix, a SAN-less, 2-tier, high-performance, virtualized data center platform. (Note: That’s quite a mouthful, but gotta include all the buzzwords in there.) It was my host’s first foray into storage networking solutions, away from his usual security solutions spread. As the conversation went on in the car, he said, “You storage guys are so strange!”

To many of the IT folks who have been involved in OS, applications, security, and networking, to name a few, storage is like a dark art, some mumbo-jumbo, voodoo-like science known to a select few. That’s great, because this perception keeps us relevant, valued and employed. To me, that’s just fine and dandy, and I like it that way. :-)

In preparation for the event, I had to read up on SNIA CDMI (a minimal sketch of what it looks like on the wire is below). Cloud and Storage … Cloud and Storage … Cloud and Storage. Hmmm …. Continue reading
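Since I was cramming CDMI anyway, here is roughly what it boils down to on the wire: REST over HTTP, with CDMI-specific media types and a version header. Below is a minimal sketch in Python; the endpoint, container path and spec version here are illustrative assumptions, not any real deployment.

```python
import json
import urllib.request

# Hypothetical CDMI endpoint, for illustration only.
BASE = "https://cloud.example.com/cdmi"

def list_container(path):
    """GET a CDMI container and return the names of its children."""
    req = urllib.request.Request(BASE + path)
    req.add_header("Accept", "application/cdmi-container")
    req.add_header("X-CDMI-Specification-Version", "1.0.1")
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)     # CDMI replies are JSON documents
    return body.get("children", [])

print(list_container("/backups/"))
```

That is really all there is to the basic read path: plain HTTP verbs on containers and data objects, which is precisely why CDMI was pitched as an easy on-ramp for cloud storage.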

Is there no one to challenge EMC?

It’s been a busy, busy month for me.

And when the IDC Worldwide Quarterly Disk Storage Systems Tracker for 3Q12 came out last week, I read in awe how impressive EMC’s figures were. But most impressive of all is how the storage market continues to grow despite very challenging and uncertain business conditions. With the Eurozone crisis, China experiencing lower economic growth numbers and the uncertainty in the US economy, it is unbelievable that the storage market grew 24.4% y-o-y. And for the first time, 7,104PB was shipped! Yes folks, more than 7 exabytes shipped during that period!

In the Top 5 of the external disk storage market based on revenue, only EMC and HDS recorded respectable growth, at 8.7% and 13.8% respectively. NetApp, my “little engine that could”, seems to be running out of steam, managing only 0.9% growth. The rest of the field, IBM and HP, recorded negative growth. Here’s a look at the Top 5 and the rest of the pack:

HP’s 11% decline is shocking to me, and given the woes upon woes HP has been experiencing, HP has not seen the bottom yet. Let’s hope that the new slew of HP storage products and technologies announced at HP Discover 2012 will lift them up. It also looked like a total rebranding of the HP storage products, with a big play on the word “Store”. They have names like StoreOnce, StoreServ, StoreAll, StoreVirtual, StoreEasy and perhaps more coming.

The Open SAN market, which includes iSCSI, has EMC again at number 1 with 29.8%, followed by IBM (14.0%), HDS (12.2%) and HP (11.8%). When the NAS numbers are combined into a NAS + Open SAN market, EMC has 33.5% while NetApp has 13.7%.

Of course, it is not just about external storage, because the direct-attached storage numbers count too. With those included, the server vendors IBM, HP and Dell are still placed behind EMC. Here’s a look at that table from IDC:

Dell is highlighted in the table above. Dell actually grew 4.0% while HP and IBM declined, gaining 0.1% of market share. However, their numbers seem too tepid, and that led to the exit of Darren Thomas, Dell’s storage group head honcho. News of Darren’s exit was on The Register.

I also want to note that NAS growth numbers actually outpaced Open SAN (including iSCSI) numbers.

This leads me to say that there is a dire need for NAS technical and technology expertise in the local storage market. With the adoption of NFSv4 under way, and SMB 2.0 and 3.0 coming into the picture, I urge all storage networking professionals who are more pro-SAN to step out of their comfort zone and look into NAS as well. The world is changing and it is no longer SAN vs NAS anymore. And NFSv4.1 is blurring the lines even more with its concept of layouts.

But back to the subject of the storage market: is there no one out there challenging EMC in a big way? NetApp, some years ago, recorded double-digit growth and challenged EMC neck-and-neck, but that mantle seems to have been taken over by HDS. Both, however, have a long way to go to get close to EMC.

Kudos to the EMC team for damn good execution!

The reports are out!

It’s another quarter, and both the Gartner and IDC reports on the disk storage market are out.

What does it take to slow down EMC, which is like a behemoth beast mowing down its competition? EMC has again topped both charts. The IDC Worldwide Disk Storage Tracker for Q1 of 2012 puts EMC at 29.0% of the market share, followed by NetApp at 14.1% and IBM at 11.4%. In fourth place is HP with 10.2%, and HDS is placed fifth with 9.4%.

In the Gartner report, EMC leads with 32.5%, followed by NetApp at 12.7% and IBM at 11.0%. HDS held fourth place at 9.5% and HP is fifth with 9.0%. Continue reading

ARC reactor also caches?

The fictional arc reactor in Iron Man’s suit was the epitome of coolness for us geeks. In the latest edition of Oracle Magazine, Iron Man is on the cover, as well as the other 5 Avengers in a limited edition series (see below).

Just about the same time, I was reading up on the ARC (Adaptive Replacement Cache) that is adopted in ZFS. I am learning in depth how ZFS caching works, as opposed to the more popular LRU (Least Recently Used) caching algorithm used in most storage cache memory. Having said that, most storage vendors employ a modified LRU algorithm, with the intention of keeping the most recently accessed pages in memory as long as possible. This is true of NetApp’s Data ONTAP (maybe not ONTAP GX, with which I have little experience) and EMC FLARE OE. ONTAP goes further by keeping the most frequently accessed pages permanently in memory. In caching-theory terms, favouring recently accessed pages is exploiting temporal locality; spatial locality, by contrast, is what prefetching of neighbouring blocks exploits.
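To make the difference concrete, here is a toy sketch in Python. This is my own illustration of the two ideas, not ZFS or NetApp code: a classic LRU cache, next to a drastically simplified ARC that splits the cache into a recency list (t1) and a frequency list (t2), and adapts the split using “ghost” lists of recently evicted keys.

```python
from collections import OrderedDict

class LRUCache:
    """Classic LRU: evict the page that was touched longest ago."""
    def __init__(self, size):
        self.size = size
        self.pages = OrderedDict()

    def access(self, key):
        if key in self.pages:
            self.pages.move_to_end(key)      # now the most recently used
            return True                      # cache hit
        if len(self.pages) >= self.size:
            self.pages.popitem(last=False)   # evict the LRU page
        self.pages[key] = True
        return False                         # cache miss

class SimpleARC:
    """Toy ARC: t1 holds pages seen once (recency), t2 pages seen
    more than once (frequency); b1/b2 are ghost lists of evicted
    keys, used to adapt the target size p of t1."""
    def __init__(self, size):
        self.size, self.p = size, 0
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _replace(self, hit_in_b2):
        if len(self.t1) + len(self.t2) < self.size:
            return                           # not full yet, nothing to evict
        if self.t1 and (len(self.t1) > self.p or
                        (hit_in_b2 and len(self.t1) == self.p)):
            key, _ = self.t1.popitem(last=False)
            self.b1[key] = True              # remember it as a ghost
        elif self.t2:
            key, _ = self.t2.popitem(last=False)
            self.b2[key] = True

    def access(self, key):
        if key in self.t1:                   # second touch: promote
            del self.t1[key]
            self.t2[key] = True
            return True
        if key in self.t2:                   # frequent page stays hot
            self.t2.move_to_end(key)
            return True
        if key in self.b1:                   # ghost hit: t1 was too small
            self.p = min(self.size, self.p + 1)
            self._replace(False)
            del self.b1[key]
            self.t2[key] = True
            return False
        if key in self.b2:                   # ghost hit: t2 was too small
            self.p = max(0, self.p - 1)
            self._replace(True)
            del self.b2[key]
            self.t2[key] = True
            return False
        self._replace(False)                 # brand-new page goes to t1
        self.t1[key] = True
        for ghosts in (self.b1, self.b2):    # keep ghost lists bounded
            while len(ghosts) > self.size:
                ghosts.popitem(last=False)
        return False

if __name__ == "__main__":
    arc = SimpleARC(8)
    workload = [1, 2, 1, 2, 1, 2] + list(range(100, 120)) + [1, 2]
    for k in workload:
        arc.access(k)
    print("hot keys survived the scan:", 1 in arc.t2 and 2 in arc.t2)
```

The demo at the bottom shows why ARC is scan-resistant: a one-pass scan of 20 cold pages only churns t1, while the frequently hit pages sitting in t2 survive. A pure LRU cache of the same size would have flushed them out completely.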

Why is ZFS using ARC and what is ARC? Continue reading

SAP wants to kill Oracle

It’s not new. SAP has been trying to do it for years, but with little success. SAP applications and their modules still very much rely on the Oracle database as their core engine, but all that could change within the next few years. SAP has HANA now.

I thought it befitting to use the movie poster of “Hanna” (albeit with an extra “N” in the spelling) to portray SAP, which clearly has Oracle in its sights now, with a sharpened arrowhead aimed at the jugular of the Oracle beast. (If you haven’t watched the movie: the girl Hanna uses a bow and arrow to hunt a large reindeer.)

What is HANA anyway? It was previously an analytics appliance, SAP HANA 1.0 SP2. Its key component is the HANA in-memory database (IMDB), and it was not aimed at the general-purpose, relational database market yet. Or perhaps that’s what SAP wants Oracle to believe. Continue reading

Gartner WW ECB 4Q11

The Gartner Worldwide External Controller-Based Disk Storage market numbers were out last night, perennially following the IDC Disk Storage Systems Tracker.

The numbers held little surprise after a topsy-turvy year for vendors like IBM, HP and especially NetApp. Overall, the positions did not change much, but we can see that the 3 vendors I mentioned face very challenging waters ahead. Here’s a look at the overall 2011 numbers:

EMC is unstoppable, gaining 3.6% market share, while IBM lost 0.2% market share despite strong sales of their XIV and Storwize V7000 solutions. This could be due to lower than expected numbers from their jaded DS series. IBM needs to ramp up.

HP stayed stagnant, even though their 3PAR numbers have been growing well. They were hit by poor numbers from the EVA (now renamed the P6000), and surprisingly from their P4000 as well. Looks like they are short-lefthanded (pun intended), and given the C-level upheavals HP went through in the past year, things are not looking good for them.

Meanwhile, Dell is unable to shake off their EMC divorce alimony, losing 0.8% market share. We know that Dell has been pushing very, very hard with Compellent, EqualLogic, and the other technologies they acquired, but somehow things are not working so well yet.

HDS has been the one to watch, with its revenue growing in double digits like NetApp’s and EMC’s. Their market share gain was 0.6%, which is very good by HDS standards. NetApp gained 0.8% market share, but they seem vulnerable after 2 poor quarters.

The 4th quarter for 2011 numbers are shown below:

I did not blog about the IDC QView numbers, which report storage software market share, but they give this entry a bit of perspective from a software point of view. From the charts at The Register, EMC has been gaining market share at the expense of competitors like Symantec, IBM and NetApp.

Tabulated differently, here’s another set of data:

On all fronts, EMC is firing on all cylinders. Like a well-oiled V12 engine, EMC is going at it with so much momentum right now. Who is going to stop EMC?

Primary Dedupe, where are you?

I am a bit surprised that primary storage deduplication has not taken off in a big way, unlike when the deduplication buzz first came into being about 4 years ago.

When the first deduplication solutions came out, they were aimed particularly at the backup data space. Now more popularly known as secondary data deduplication, the technology reduced the inefficiencies of backup and helped spark the frenzy of adulation around companies like Data Domain, Exagrid, Sepaton and Quantum a few years ago. The software vendors were not left out either: Symantec, Commvault, and everyone else in town had data deduplication for backup and archiving.

It was no surprise that EMC battled NetApp and finally won the rights to acquire Data Domain for US$2.4 billion in 2009. Today, in my opinion, the landscape of secondary data deduplication has pretty much settled and matured. Practically everyone has some sort of secondary data deduplication technology or solution in place.

But talk of primary data deduplication now hardly causes a ripple compared to a few years ago, especially here in Malaysia. Yeah, the IT crowd is pretty fickle that way, because most tend to follow the trend of the moment. Last year it was Cloud Computing and now the big buzzword is Big Data.

We are here to look at technologies to solve problems, folks, and primary data deduplication solutions should be considered in any IT planning. It is our job as storage networking professionals to continue advising customers on what is relevant to their business and to address their pain points.

I get a bit cheesed off that companies like EMC or HDS continue to spend their marketing dollars on hyping the trends of the moment rather than using some of those funds to promote good technologies, such as primary data deduplication, that solve real-life problems. The same goes for most IT magazines, publications and other communication media, which rarely give space to technologies that solve problems on the ground and just harp on hype, fuzz and buzz. It gets a bit too ordinary (and mundane) when they try too hard to be extraordinary, because everyone is basically talking about the same freaking thing at the same time, over and over again. (Hmmm … I think I am going off topic now … I better shut up!)

We are facing an avalanche of data. The other day, the CEO of Nexenta used the term “data tsunami”, but whatever term is used does not matter: there is too much data. Secondary data deduplication solved one part of the problem, and now it is time to talk about the other part, data in primary storage, hence primary data deduplication.

What is out there? Who’s doing what in terms of primary data deduplication?

NetApp has had A-SIS (now NetApp dedupe) for years, and they are good in my books. They talk to customers about the benefits of deduplication on their FAS filers. (Side note: I am seeing more benefits from using data compression in primary storage, but I am not going there in this entry.) EMC had primary data deduplication in their Celerra years ago, but they hardly talked much about it. It’s on their VNX as well, but again, nobody at EMC ever speaks about their primary deduplication feature.

I have always loved Ocarina Networks’ ECO technology, but Dell hasn’t given much of a hoot about Ocarina since the acquisition in 2010. The technology surfaced a few months ago in the Dell DX6000G Storage Compression Node for its Object Storage Platform, but then again, all Dell talks about is the Fluid Data Architecture from the Compellent division. Hey Dell, you guys are so one-dimensional! Ocarina is a wonderful gem in your jewel case, and yet all your storage guys talk about are Compellent and EqualLogic.

Moving on … I ought to knock Oracle on the head too. ZFS has great data deduplication technology meant for primary data, and a couple of years back, Greenbytes took that and made a solution out of it. I don’t follow what Greenbytes is doing nowadays, but I do hope the big wave of primary data deduplication rises so that companies such as Greenbytes can take off in a big way. No thanks to Oracle for ignoring another gem in ZFS and wasting their resources on pre-sales (in Malaysia) and partners (in Malaysia) that hardly know much about the immense power of ZFS.

But an unexpected source, Microsoft, could help trigger greater interest in primary data deduplication. I have just read that the next version of the Windows Server OS will have primary data deduplication integrated into NTFS. The feature will be available in Windows 8, and the architectural view is shown below:

The primary data deduplication in NTFS will be a feature add-on for Windows Server users. It is implemented as a filter driver on a per-volume basis, with each volume a complete, self-describing unit. It is cluster-aware, and fully crash-consistent on all operations.

The technology is Microsoft’s own, built from scratch, and it will work to position Hyper-V as a strong enterprise choice in its battle with VMware for the server virtualization space. Mind you, VMware already has a big, big lead, and this is something Microsoft must do, or die trying, to keep Hyper-V in the catch-up game. Otherwise, the gap between Microsoft and VMware in the server virtualization space will grow even greater.

I don’t have the full details of this, but I read that the NTFS primary deduplication chunk sizes will be between 32KB and 128KB, and that the deduplication will be post-process.
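Mechanically, the idea is easy to sketch. Below is a toy post-process dedupe in Python; it is my own illustration of the general chunk-and-fingerprint technique, not Microsoft’s implementation. The chunk sizes are scaled down from the 32KB-128KB mentioned above, and the boundary-picking checksum is deliberately naive (a real system would use a sliding-window fingerprint such as a Rabin hash, so an insert in the middle of a file only disturbs nearby chunks):

```python
import hashlib

# Toy variable-size chunking + dedupe store (illustrative only).
# Sizes are scaled down; the NTFS feature reportedly uses 32KB-128KB.
MIN_CHUNK, MAX_CHUNK = 2048, 8192
MASK = 0x3FF                       # ~1 boundary per 1024 bytes on average

def chunk_boundaries(data):
    """Yield (start, end) offsets of variable-size chunks, cutting
    where a simple rolling checksum hits a magic value."""
    start, rolling = 0, 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) + byte) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= MIN_CHUNK and (rolling & MASK) == 0) or size >= MAX_CHUNK:
            yield start, i + 1
            start, rolling = i + 1, 0
    if start < len(data):
        yield start, len(data)      # trailing partial chunk

class DedupeStore:
    """Post-process dedupe: each file becomes a recipe of chunk
    fingerprints; each unique chunk body is stored exactly once."""
    def __init__(self):
        self.chunks = {}            # sha256 digest -> chunk bytes
        self.files = {}             # file name -> list of digests

    def ingest(self, name, data):
        recipe = []
        for s, e in chunk_boundaries(data):
            digest = hashlib.sha256(data[s:e]).digest()
            self.chunks.setdefault(digest, data[s:e])   # store once
            recipe.append(digest)
        self.files[name] = recipe

    def read(self, name):
        """Rehydrate a file from its recipe of fingerprints."""
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupeStore()
payload = bytes(range(256)) * 64            # 16KB of repetitive data
store.ingest("a.bin", payload + payload)    # two identical halves
assert store.read("a.bin") == payload + payload
unique = sum(len(c) for c in store.chunks.values())
print(f"{len(payload) * 2} logical bytes stored as {unique} unique bytes")
```

The “post-process” part simply means this walk happens after the data has already landed on disk, on a schedule, rather than inline in the write path, which is why it costs so little on the primary I/O.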

With Microsoft introducing its technology soon, I hope primary data deduplication will get its deserved accolades, because I think most companies are really not doing justice to the great technologies they have in their jewel cases. And I hope Microsoft, with all its marketing savvy and adeptness, will do some justice to a technology that solves real-life data problems.

I bid you good luck – Primary Data Deduplication! You deserve better.

Phoenix rising from OpenSolaris ashes

I got a little nostalgic over the weekend. As I was working on Solaris 11 x86 over the past few weeks, I got a little bit peeved about how much Oracle has changed the OS.

Commands like ifconfig don’t appear to be very functional anymore; instead, ipadm has taken over most of the configuration options. And when I was working with Jumpstart (damn!), it did not work the way I knew anymore. AI (Automated Install) has now taken over from Jumpstart and I have to relearn the whole what-cha-ma-callit. Dang!

I remember the day when Solaris x86 first came out in the early 90s. I was ecstatic because I could finally test and run Solaris on the x86 platform. I could get things running at home and have fun with it. Drivers were limited then (and still are, though things have gotten much better), but I was happily hacking away alongside the other Linux distros as the open source revolution was just beginning. After I joined NetApp, things started to change and I abandoned Solaris in favour of Linux, as my job, as well as my interest, was in Linux, especially RedHat. I eventually got my RHCE and completely lost touch with Solaris. By 2005, when OpenSolaris was announced under the CDDL (Common Development and Distribution License), I was no longer well versed in the developments of Solaris and OpenSolaris.

Enough about my nostalgia, because I am beginning to see a young phoenix (the mythical firebird) rising from the mess of what Oracle did to OpenSolaris! Since Oracle purchased Sun in 2010, Oracle has practically burned OpenSolaris to ashes. On August 13, 2010, Oracle announced the end of OpenSolaris in an internal memo, which read:

Solaris Engineering,

Today we are announcing a set of decisions regarding the path to
Solaris 11, and answering key pending questions on open source, open
development, software and binary licenses, and how developers and
early adopters will be able to use Solaris 11 technology before its
release in 2011.

As you all know, the term “OpenSolaris” has been used colloquially to
refer to any or all of a collection of source code, a development
model, a web site, a logo, a binary release, a source license, a
community, and many other related things. So it’s taken a while to go
over each issue from an organizational and business perspective, and
align on the correct next step. Therefore, please take the time to
read all of the detail here carefully. We’ll discuss our strategy
first, and then the decisions and changes to our policies and
processes that implement that strategy.

If you want the entire memo (and all the fa-lah-lah that goes with it), go to Steven Stallion’s blog. Incidentally Steven Stallion was the OpenSolaris kernel developer who leaked the memo into the open.

It became pretty obvious that Oracle’s business-suit culture and its “is this going to make money?” ways were suffocating the talents and innovations of the Sun engineering tribes. Some of the high-profile leavers were James Gosling (father of Java) and Jeff Bonwick (father of RAID-Z, and leader of the ZFS development team at Sun). And there was an exodus of top talent within 90-120 days of the Oracle acquisition.

The key technologies that went into OpenSolaris (and Solaris) were slowly but surely deprived of their inventors’ and maintainers’ nourishment. These technologies were:

  • ZFS (Project Pacific)
  • DTrace
  • Zones (aka Solaris Containers, aka Project Kevlar)
  • Fault Management Architecture (FMA)
  • Service Management Facility (SMF)
  • Advanced Network Virtualization (Project Crossbow)
  • Least-privilege

and many more. Some of these technologies were already open under the CDDL license, but some were still very much proprietary to Sun (I mean, Oracle). It was difficult to use what was available under the OpenSolaris CDDL license to rebuild again, especially when the inventors, talents and maintainers were by then scattered across companies like Delphix, Nexenta, Greenbytes, Joyent and so on.

At the end of last year, shortly before Solaris 11 was announced by Oracle, the people who are passionate about OpenSolaris (and Solaris) got together in full force again. Dubbed “Project Illumos”, the key people who had developed for Sun convened to build a new open-source, Solaris-based operating environment. The proprietary bits closely guarded by Oracle are going to be either rebuilt from scratch or ported from BSD into the last OpenSolaris kernel before Oracle killed it. That kernel was Solaris Nevada, which was supposed to be the successor to Solaris 10.

The Illumos team already has a bootable and working operating environment, and new development is going on at a frantic pace. In the words of Bryan Cantrill (father of DTrace), now VP of Engineering at Joyent:

“illumos was not designed to be a fork, but rather an entirely open downstream repository of OpenSolaris”

And the talents congregating around the Illumos project (like moths to a flame) are super-stellar. Just have a look at this list:

  • ZFS –> Matt Ahrens, Eric Schrock, George Wilson, Adam Leventhal, Bill Pijewski and Brendan Gregg
  • SMF –> Dan McDonald and Sumit Gupta
  • DTrace –> Bryan Cantrill, Adam Leventhal, Brendan Gregg, Eric Schrock, Dave Pacheco
  • Zones & Jumpstart –> Jerry Jelinek
  • and many, many more.

KVM (the Linux kernel-based virtual machine) is being added to the Illumos operating environment, giving it the final piece of the puzzle.

I cannot help but feel extremely proud that OpenSolaris (and Solaris) is not dead; it is alive and rising. Oracle cannot lay claim to the source code and the rights of Illumos (according to Bryan Cantrill) without itself abiding by the CDDL licensing and distribution scheme that it killed off a year ago.

And this is indeed the young phoenix rising!