My first TechFieldDay

[Preamble: I have been invited by GestaltIT as a delegate to their TechFieldDay from Oct 17-19, 2018 in Silicon Valley, USA. My travel and accommodation expenses are paid by GestaltIT, the organizer, and I am not obligated to blog about or promote the technologies presented at this event. The content of this blog is my own opinions and views.]

I have attended a bunch of Storage Field Days over the years, but I have never attended a Tech Field Day. This coming week, I will be attending the 17th edition, Tech Field Day 17, my first one. I have always enjoyed Storage Field Days; every time I joined as a delegate, there were new things to discover, and almost always, serendipity happened.


The beginning of the end of FCoE

Never bet against Ethernet!

I am sure many IT experts and practitioners would agree. In the past 30 years or so, Ethernet has fought and won against many so-called "Ethernet killers". The one that stood out for me was ATM (Asynchronous Transfer Mode), because in a past job, I implemented NFS over ATM, running in LANE (LAN Emulation) mode, in a NetApp filer setup at Sarawak Shell.

That was more than 10 years ago, and 10 years ago ATM was hot technology. It was touted as the next-generation network technology, supposed to unify voice, data and networking. ATM also had better framing and QoS (Quality-of-Service) control, and offered several modes of traffic shaping and policies. Yet today, ATM is reduced to a niche telecommunication protocol and does not participate much in the LAN technology space.

That was the networking space. The storage networking space has been dominated by Fibre Channel for almost 15 years. Fibre Channel is a serial technology that replaced the parallel, channel-based technology of SCSI in the enterprise, and it has grown by leaps and bounds, dominating the SAN (Storage Area Network) landscape with speeds up to 16Gbit/sec today.

When the networking world and the storage networking world collided (I mean combined) with Fibre Channel over Ethernet (FCoE) technology some years back, something had to give. Yup, FCoE was really hot 2 years ago, but where is it today? Is Cisco still singing about FCoE like it used to? What about the other storage vendors that used to have at least 1 FCoE slide in their product presentations?

Welcome to the world of IT hypes! FCoE's benefit? The ability to carry LAN and SAN traffic on one piece of wire. 10 Gigabit-style, baby!


The reports are out!

It’s another quarter, and both the Gartner and IDC reports on the disk storage market are out.

What does it take to slow down EMC, which is like a behemoth beast mowing down its competition? EMC has again topped both charts. The IDC Worldwide Disk Storage Tracker for Q1 2012 puts EMC at 29.0% market share, followed by NetApp at 14.1% and IBM at 11.4%. In fourth place is HP with 10.2%, and HDS is fifth with 9.4%.

In the Gartner report, EMC leads with 32.5%, followed by NetApp at 12.7% and IBM with 11.0%. HDS held fourth place at 9.5% and HP is fifth with 9.0%.

SAP wants to kill Oracle

It’s not new. SAP has been trying to do it for years, with little success. SAP applications and modules still very much rely on the Oracle database as their core engine, but all that could change within the next few years. SAP has HANA now.

I thought it befitting to use the movie poster of “Hanna” (albeit with an extra “N” in the spelling) to portray SAP, which clearly has Oracle in its sights now, with a sharpened arrowhead aimed at the jugular of the Oracle beast. (If you haven’t watched the movie, you will see the girl Hanna using a bow and arrow to hunt a large reindeer.)

What is HANA anyway? It was previously an analytics appliance in SAP HANA 1.0 SP2. Its key component is the HANA in-memory database (IMDB), and it is not yet aimed at the general-purpose relational database market. Or perhaps that’s what SAP wants Oracle to believe.

Gartner WW ECB 4Q11

The Gartner Worldwide External Controller Based Disk Storage market numbers came out last night, perennially following the IDC Disk Storage Systems Tracker.

The numbers held little surprise after a topsy-turvy year for vendors like IBM, HP and especially NetApp. Overall, the positions did not change much, but we can see that the 3 vendors I mentioned are facing very challenging waters ahead. Here’s a look at the overall 2011 numbers:

EMC is unstoppable, gaining 3.6% market share, while IBM lost 0.2% market share despite strong sales of its XIV and Storwize V7000 solutions. This could be due to lower-than-expected numbers from its jaded DS series. IBM needs to ramp up.

HP stayed stagnant, even though its 3PAR numbers have been growing well. It was hit by poor numbers from the EVA (now renamed the P6000 series) and, surprisingly, the P4000s as well. Looks like they are short-LeftHanded (pun intended), and given the C-level upheavals HP went through in the past year, things are not looking good.

Meanwhile, Dell is unable to shake off its EMC divorce alimony, losing 0.8% market share. We know that Dell has been pushing very, very hard with Compellent, EqualLogic, and the other technologies it acquired, but somehow things are not working that well yet.

HDS has been the one to watch, with its revenue growing in double digits like NetApp and EMC. Its market share gain was 0.6%, which is very good by HDS standards. NetApp gained 0.8% market share but seems vulnerable after 2 poor quarters.

The 4th quarter for 2011 numbers are shown below:

I did not blog about the IDC QView numbers, which report storage software market share, but they give this entry a bit of perspective from a software point of view. From the charts at The Register, EMC has been gaining market share at the expense of competitors like Symantec, IBM and NetApp.

Tabulated differently, here’s another set of data:

On all fronts, EMC is firing on all cylinders. Like a well-oiled V12 engine, EMC is going at it with tremendous momentum right now. Who is going to stop EMC?

IDC 4Q11 Tracker numbers are in

It was a challenging 2011, but the tremendous growth of data continues to spur the growth of storage. According to IDC in its latest Worldwide Quarterly Disk Storage Systems Tracker, the storage market grew a healthy 7.7% in factory revenues, and the total disk storage capacity shipped was 6,279 petabytes, up 22.4% year-on-year! What Greg Schulz once said is absolutely true: “There is no recession in storage.”

Let’s look at the numbers. Overall, the positions of the storage vendors did not change much, but to me, the more exciting part is the growth quarter over quarter.

EMC and NetApp continue to post double-digit growth, at 25.9% and 16.6% respectively, once again taking market share from HP and others. HP, with the upheaval going on throughout its organization right now, got hit badly with a decline of 3.8%, while IBM held ground at 0.0%. And 0.0% growth when data is growing at more than 20% is not good, not good at all.

HDS, continuing its momentum with a good story, took a decent 11.6%, a fantastic number by HDS standards. I have been out with my HDS buddies and I can feel an excitement and energy in them that I have never seen before. That is a good indicator of the innovation and new technology coming out of HDS. They just need to work on their marketing and tell the industry more about what they are doing. The Japanese can be so modest.

From the report, 2 things peeved me.

  • IDC reports that NetApp and HP are *tied* at 3rd. This does not make any sense at all! How can they be tied when NetApp has double-digit growth, 11.2% market share and revenue of USD$734 million, while HP has negative growth, 10.3% market share and USD$677 million in revenue? The logic boggles my mind!
  • They lumped Dell and Oracle into “Others”! And Others had -1.4% growth. I am eager to find out how these 2 companies are doing, especially Dell, which has been touting superb growth with its storage story.

Meanwhile, in TOTAL, here’s the table for the Total Worldwide Disk Storage Market share for 4Q11.

Numbers don’t lie. HP and IBM, in 2nd and 3rd place respectively, are not in good shape. Negative growth in an upward-trending market spells more trouble in the long run, and they had better buck up.

In this table, Dell gained and went up to #4, ahead of NetApp, and from the look of things, Dell is doing all the right things to make its storage market story gel together. Kudos! In fact, NetApp’s position and perception in the last 2-3 quarters have been shaky, with Dell and HDS breathing down its neck. There isn’t likely to be a significant dent to NetApp from HDS or Dell at this point in time, but having been “the little engine that could” (that’s what I used to call NetApp) for the last few years, NetApp seems to be losing a bit of the extra “ooomph” that excited the market in the past.

Lastly, I just want to say that my comments are based on the facts and figures in the tables published by IDC. I remember the last time I commented with the same approach; buddies of mine in the industry disagreed with me, saying that each of them is doing great in the Malaysian or South East Asian market.

Sorry guys, I blog what I see, and I welcome you to take me to your sessions to let me know how well you are doing here. I would be glad to write more about it. (Hint, hint.)


Oracle Bested the Best in Quality

I have been an avid reader of SearchStorage’s Storage magazine for many years now, downloading their free PDF copy every month. Quietly tucked at the end of the January 2012 issue was the Storage magazine 6th annual Quality Awards for NAS.

I was pleasantly surprised with the results, because the previous annual awards were dominated by NetApp and EMC, but this time around a dark horse has emerged. It is Oracle that took top honours in both the Enterprise and the Mid-range categories.

The awards are the result of Storage magazine’s survey, and below is an excerpt about the survey:


In both categories covering the Enterprise and the Mid-Range, the overall ratings are shown below:


Surprised? You bet, because I was.

The survey does not focus on speeds and feeds, or compare scalability or performance. Rather, it focuses on the qualitative aspects of the NAS products. Many storage vendors were part of the participation list, but most did not qualify or make a dent in what the top 6 did. Here’s a list of the vendors surveyed:


The qualitative aspects of the survey focused on 5 main areas:

  • Sales force competency
  • Initial Quality
  • Product Features
  • Product Reliability
  • Technical Support

In each of the 5 main areas, customers were asked a series of questions. Here is a breakdown of those questions for each area.

Sales Force Competency

  1. Is the sales force knowledgeable about their products and their customers’ industries?
  2. How flexible is their sales effort?
  3. How good are they at keeping the customer’s interest level up?

Initial Product Quality

  1. Does the product need little or no vendor intervention?
  2. Ease of installation and ease of use
  3. Good value for money
  4. Reasonable professional services requirements, or needing little professional services at all
  5. Installation without defects
  6. Getting it right the first time

Product Features

  1. Storage management features
  2. Mirroring features
  3. Capacity scaling features
  4. Interoperability with other vendors’ products
  5. Remote replication features
  6. Snapshotting features

Product Reliability

  1. Vendor provides comprehensive upgrade procedures
  2. Ability to meet Service Level Agreements (SLAs)
  3. Experiences very little downtime
  4. Patches applied non-disruptively

Technical Support

  1. Taking ownership of the customer’s problem
  2. Timely problem resolution and technical advice
  3. Documentation
  4. Vendor supplies support contractually as specified
  5. Vendor’s 3rd party partners are knowledgeable
  6. Vendor provides adequate training

These are some of the intangibles that customers look for when they qualify NAS solutions from vendors. And the surprise was that Oracle has become a force to be reckoned with, backed by the strong customer-centric legacy of Sun and StorageTek. If this is truly happening in the US, then kudos to Oracle for maximizing the Sun-StorageTek enterprise genes to make their NAS products best-of-breed.

However, on the local front, it seems to me that Oracle isn’t doing much justice to the human potential it inherited from Sun. A little bird told me that they got rid of some good customer service people in Malaysia and Singapore just last month, and more could be on the way in 2012. All this for the sake of meeting some silly key performance indicators (KPIs) measured in tasks per day.

The Sun people that I know here in Malaysia and Singapore are gurus who have gone through the fire and thrived, and there is no substitute for that quality. Unfortunately, in Oracle, it’s all about numbers, whether it is sales or tasks per day.

Well, back to the survey. And of course, the final question would be, “Is the product good enough that you would buy it again?” And the results are …


Good for Oracle in the US, but the results do not fully reflect what’s on the ground here in Malaysia, which is more likely dominated by NetApp, HP, EMC and IBM.

Primary Dedupe, where are you?

I am a bit surprised that primary storage deduplication has not taken off in a big way, unlike when the buzz of deduplication first came into being about 4 years ago.

When deduplication solutions first came out, they were aimed particularly at the backup data space. Now more popularly known as secondary data deduplication, the technology reduced the inefficiencies of backup and helped spark the frenzy of adulation around companies like Data Domain, ExaGrid, Sepaton and Quantum a few years ago. The software vendors were not left out either: Symantec, CommVault, and everyone else in town had data deduplication for backup and archiving.

It was no surprise that EMC battled NetApp and finally won the right to acquire Data Domain for USD$2.4 billion in 2009. Today, in my opinion, the landscape of secondary data deduplication has pretty much settled and matured. Practically everyone has some sort of secondary data deduplication technology or solution in place.

But today, talk of primary data deduplication hardly causes a ripple compared to a few years ago, especially here in Malaysia. Yeah, the IT crowd is pretty fickle that way, because most tend to follow the trend of the moment. Last year it was Cloud Computing, and now the big buzzword is Big Data.

We are here to look at technologies that solve problems, folks, and primary data deduplication solutions should be considered in any IT planning. It is our job as storage networking professionals to continue to advise customers on what is relevant to their business and to address their pain points.

I get a bit cheesed off that companies like EMC and HDS continue to spend their marketing dollars hyping the trends of the moment, rather than using some of those funds to promote good technologies, such as primary data deduplication, that solve real-life problems. The same goes for most IT magazines, publications and other communication media, which rarely give space to technologies that solve problems on the ground, and just harp on hype, fuzz and buzz. It gets a bit too ordinary (and mundane) when they try too hard to be extraordinary, because everyone is basically talking about the same freaking thing at the same time, over and over again. (Hmmm … I think I am going off topic now … I better shut up!)

We are facing an avalanche of data. The other day, the CEO of Nexenta used the words “data tsunami”, but whatever term is used does not matter. There is too much data. Secondary data deduplication solved one part of the problem, and now it’s time to talk about the other part: data in primary storage, hence primary data deduplication.

What is out there? Who’s doing what in terms of primary data deduplication?

NetApp has had A-SIS (now NetApp deduplication) for years, and it is good in my books. They talk to customers about the benefits of deduplication on their FAS filers. (Side note: I am seeing more benefits from using data compression in primary storage, but I am not going to go there in this entry.) EMC had primary data deduplication in its Celerra years ago, but hardly talked much about it. It’s on the VNX as well, but again, nobody at EMC ever speaks about their primary deduplication feature.
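
To give a feel for how simple this is to switch on, here is a rough sketch of enabling NetApp deduplication on a 7-mode FAS volume (from memory; the volume name /vol/vol1 is illustrative, and the exact syntax varies by Data ONTAP release):

# Enable deduplication (A-SIS) on the volume
sis on /vol/vol1

# Scan and deduplicate the data already sitting on the volume
sis start -s /vol/vol1

# Check dedupe status and the space savings achieved
sis status /vol/vol1
df -s /vol/vol1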

I have always loved Ocarina Networks’ ECO technology, and Dell hasn’t given much of a hoot about Ocarina since the acquisition in 2010. The technology surfaced a few months ago in the Dell DX6000G Storage Compression Node for its Object Storage Platform, but then again, all Dell talks about is the Fluid Data Architecture from its Compellent division. Hey Dell, you guys are so one-dimensional! Ocarina is a wonderful gem in your jewel case, and yet all your storage guys talk about are Compellent and EqualLogic.

Moving on … I ought to knock Oracle on the head too. ZFS has great data deduplication technology meant for primary data, and a couple of years back, GreenBytes took that and made a solution out of it. I don’t follow what GreenBytes is doing nowadays, but I do hope the big wave of primary data deduplication will rise for companies such as GreenBytes to take off in a big way. No thanks to Oracle for ignoring another gem in ZFS and wasting their resources on pre-sales (in Malaysia) and partners (in Malaysia) that hardly know much about the immense power of ZFS.
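
For the curious, ZFS deduplication is a one-line property switch; a minimal sketch (the dataset name tank/data is made up, and bear in mind that inline dedupe wants plenty of RAM to hold its dedup table):

# Turn on inline, block-level deduplication for a dataset
zfs set dedup=on tank/data

# The pool-wide dedup ratio shows up in the DEDUP column
zpool list tank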

But an unexpected source, Microsoft, could help trigger greater interest in primary data deduplication. I have just read that the next version of the Windows Server OS will have primary data deduplication integrated into NTFS. The feature will be available in Windows 8, and the architectural view is shown below:

The primary data deduplication in NTFS will be a feature add-on for Windows Server users. It is implemented as a filter driver on a per-volume basis, with each volume a complete, self-describing unit. It is cluster-aware and fully crash-consistent on all operations.

The technology is Microsoft’s own, built from scratch, and it will work to position Hyper-V as a strong enterprise choice in the battle with VMware for the server virtualization space. Mind you, VMware already has a big, big lead, and this is something Microsoft must do, or die trying, to keep Hyper-V in the catch-up game. Otherwise, the gap between Microsoft and VMware in the server virtualization space will grow even greater.

I don’t have the full details of this, but I read that the NTFS primary deduplication chunk sizes will be between 32KB and 128KB, and that it will be post-processing.

With Microsoft introducing its technology soon, I hope primary data deduplication will get some deserved accolades, because I think most companies are really not doing justice to the great technologies they have in their jewel cases. And I hope Microsoft, with all its marketing savviness and adeptness, will do some justice to a technology that solves real-life data problems.

I bid you good luck, Primary Data Deduplication! You deserve better.

Phoenix rising from OpenSolaris ashes

I got a little nostalgic over the weekend. As I was working on Solaris 11 x86 over the past few weeks, I got a little bit peeved about how much Oracle has changed the OS.

Commands like ifconfig do not appear to be very functional anymore; instead, ipadm has taken over most of the configuration options. And when I worked with Jumpstart (damn!), it did not work the way I knew anymore. Now AI (Automated Install) has taken over from Jumpstart and I have to relearn the whole what-cha-ma-callit. Dang!
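
To illustrate the kind of relearning involved, here is a rough before-and-after for configuring a static address (a sketch only; the interface names and the address are made up):

# Solaris 10 and earlier: plumb and configure with ifconfig
ifconfig e1000g0 plumb
ifconfig e1000g0 192.168.1.10 netmask 255.255.255.0 up

# Solaris 11: create the IP interface and an address object with ipadm
ipadm create-ip net0
ipadm create-addr -T static -a 192.168.1.10/24 net0/v4
ipadm show-addr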

I remember the day when Solaris x86 first came out in the early 90s. I was ecstatic because I could finally test and run Solaris on the x86 platform. I could get things running at home and have fun with it. Drivers were limited then (and still are, though things have gotten much better), but I was happily hacking away alongside other Linux distros as the open source revolution was just beginning. After I joined NetApp, things started to change and I abandoned Solaris in favour of Linux, as my job, as well as my interest, was in Linux, especially Red Hat. I eventually got my RHCE and completely lost touch with Solaris. By 2005, when OpenSolaris was announced under the CDDL (Common Development and Distribution License), I was no longer well versed in the developments of Solaris and OpenSolaris.

Enough about my nostalgia, because I am beginning to see a young phoenix (the mythical firebird) rising from the mess Oracle made of OpenSolaris! Since purchasing Sun in 2010, Oracle has practically burned OpenSolaris to ashes. On August 13, 2010, Oracle announced the end of OpenSolaris in an internal memo, and it read:

Solaris Engineering,

Today we are announcing a set of decisions regarding the path to
Solaris 11, and answering key pending questions on open source, open
development, software and binary licenses, and how developers and
early adopters will be able to use Solaris 11 technology before its
release in 2011.

As you all know, the term “OpenSolaris” has been used colloquially to
refer to any or all of a collection of source code, a development
model, a web site, a logo, a binary release, a source license, a
community, and many other related things. So it’s taken a while to go
over each issue from an organizational and business perspective, and
align on the correct next step. Therefore, please take the time to
read all of the detail here carefully. We’ll discuss our strategy
first, and then the decisions and changes to our policies and
processes that implement that strategy.

If you want the entire memo (and all the fa-lah-lah that goes with it), go to Steven Stallion’s blog. Incidentally, Steven Stallion was the OpenSolaris kernel developer who leaked the memo into the open.

It became pretty obvious that Oracle’s business-suit culture and “is this going to make money?” ways were suffocating the talent and innovation of the Sun engineering tribes. Some of the high-profile leavers were James Gosling (father of Java) and Jeff Bonwick (father of RAID-Z, who led the ZFS development team at Sun). And there was an exodus of many top talents within 90-120 days of the Oracle acquisition.

The key technologies that went into OpenSolaris (and Solaris) were slowly but surely deprived of their inventors’ and maintainers’ nourishment. These technologies were:

  • ZFS (Project Pacific)
  • DTrace
  • Zones (aka Solaris Containers, aka Project Kevlar)
  • Fault Management Architecture (FMA)
  • Service Management Facility (SMF)
  • Advanced Network Virtualization (Project Crossbow)
  • Least-privilege

and many more. Some of these technologies were already open under the CDDL license, but some were still very much proprietary to Sun (I mean, Oracle). It was difficult to use what was available under the OpenSolaris CDDL license to rebuild again, especially when the inventors, talents and maintainers are now scattered across companies like Delphix, Nexenta, GreenBytes, Joyent and so on.

At the end of last year, shortly before Solaris 11 was announced by Oracle, the people who are passionate about OpenSolaris (and Solaris) got together in full force again. Dubbed “Project Illumos”, the key people who had developed for Sun convened to build a new open-source, Solaris-based operating environment. The proprietary bits that are closely guarded by Oracle are going to be either rebuilt from scratch or ported from BSD into the last OpenSolaris kernel before Oracle killed it. That kernel was Solaris Nevada, which was supposed to be the successor to Solaris 10.

The Illumos team already has a bootable and working operating environment and new developments are going on at a frantic pace. From the words of Bryan Cantrill (father of DTrace) and now VP of Engineering at Joyent,

“illumos was not designed to be a fork, but rather an entirely open downstream repository of OpenSolaris”

And the talents congregating around the Illumos project (like moths to a flame) are super-stellar. Just have a look at this list:

  • ZFS –> Matt Ahrens, Eric Schrock, George Wilson, Adam Leventhal, Bill Pijewski and Brendan Gregg
  • SMF –> Dan McDonald and Sumit Gupta
  • DTrace –> Bryan Cantrill, Adam Leventhal, Brendan Gregg, Eric Schrock, Dave Pacheco
  • Zones & Jumpstart –> Jerry Jelinek
  • and many, many more.

KVM (the Linux kernel-based virtual machine) is being added into the Illumos operating environment, giving it the final piece of the puzzle.

I cannot help but feel extremely proud that OpenSolaris (and Solaris) is not dead; it’s alive and rising. Oracle cannot lay claim to the source code and the rights of Illumos (according to Bryan Cantrill) without itself abiding by the CDDL licensing and distribution scheme it killed off a year ago.

And this is indeed the young phoenix rising!

More specialized appliances at Oracle OpenWorld

I was reading the news from Oracle OpenWorld, and a slew of specialized appliances is on the menu.

Oracle added the Big Data Appliance and the Oracle Exalytics Business Intelligence Machine to its previous numero uno, the Exadata Database Machine. EMC, meanwhile, announced its Greenplum Data Computing Appliance and its VNX Unified Storage for Oracle.

As quoted:

The EMC VNX Unified Storage for Oracle is a VNX system that has Oracle installed in a VMware vSphere virtual machine environment. The system is meant to unify all Oracle environments--database over Oracle Direct NFS, application servers over NFS, and testing and development over NFS--resulting in less disk space used and faster testing. EMC says this configuration was made because 50% of Oracle customers are virtualizing their systems today.

The VNX Unified Storage for Oracle includes EMC's Fully Automated Storage Tiering (FAST) technology, which migrates most frequently used data between a primary Fibre Channel drive and solid state drives and migrates less frequently used data to Serial ATA (SATA) drives, and its FAST Cache. In an Oracle environment, FAST is well-suited to database applications that generate a large number of random inputs-outputs, that experience sudden bursts in user query activity, or a high number of user loads and where the entire working set can be contained in the solid state drive cache.

Based on testing carried out on an Oracle Real Application Clusters (RAC) 11g database that was configured to access the VNX7500 file storage over the Network File System (NFS), using the Oracle Direct NFS (dNFS) client, results showed a 100% improvement in transactions per minute (TPM), 170% improvement in IOPS, and a 79% decrease in response time, the company said.
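
As a side note on the Oracle Direct NFS client mentioned above, enabling it is a small exercise on the database host. Here is a rough sketch for Oracle 11g on Linux (the server name vnx01, the IP address and the paths are made up for illustration):

# Relink Oracle to switch the dNFS ODM library on (11g)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# Then describe the NFS server and mounts in oranfstab,
# e.g. in $ORACLE_HOME/dbs/oranfstab:
#   server: vnx01
#   path: 192.168.10.50
#   export: /oradata  mount: /u02/oradata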

As for Greenplum, EMC was quoted:

The company also is showing off the EMC Greenplum Data Computing Appliance (DCA) for Big Data Analytics configuration, which provides a new migration path to Greenplum for Oracle Data Warehouse. This system includes the Greenplum Data Computing Appliance, EMC's Global Data Warehouse, and EMC's IT Business Intelligence Grid infrastructure. The EMC Greenplum DCA consists of 8 to 16 segment servers running Red Hat Enterprise Linux. Each segment server contains 96 to 192 processor cores, with 384 GB to 768 GB of memory per segment server. The DCA includes 12 600-GB Serial Attached SCSI (SAS) 15K RPM drives for a total useable and compressed capacity of 73 TB to 144 TB. The DCA competes with Oracle's Exadata Database Machine.

In tests performed with this server/storage configuration and a 15-TB Oracle Data Warehouse, the DCA processed a 99 million row query in less than 28 seconds vs. seven minutes in a traditional Oracle environment, and data loads decreased from six days to 29 minutes.

It is getting pretty obvious that specialized appliances are making waves at Oracle OpenWorld, but what’s more interesting is the return of a combined and integrated environment of compute and storage, as I mentioned in my previous blog. I foresee that these specialized appliances will be one of the building blocks of cloud computing, together with general-purpose platforms such as x86, JBODs and the glue to all these, virtualization, notably VMware.