HDS HNAS kicks ass

I am dusting off the cobwebs of my blog. After almost 3 months of inactivity (and trying to stay clear of the Social Media Guidelines of my present company), I have mustered enough energy to start writing again. I am tired, and I am finishing off my previous engagements prior to joining HDS. But I am glad those are coming to an end, with the last job in Beijing next week.

So officially, I will be with HDS as of November 4, 2013. And to get into my employer's good books, I think I should start with something with which HDS has proved many critics wrong. The notion that HDS is poor with NAS solutions has been dispelled by a recent SPECSfs benchmark report, especially when it comes to NFS file performance. HDS has never been much of a big shouter about its HNAS, even back in the days of the OEM relationship with BlueArc. The period after the BlueArc acquisition was also, in my opinion, a quiet one, unless it was the gestation period for this kick-ass announcement a couple of weeks ago. Here is one of the news pieces circulating on the web, from the ever-trusty El Reg.

HDS has never been a big shouter like EMC and NetApp, who have plenty of marketing dollars to spend. EMC Isilon and NetApp C-Mode have always touted their mighty SPECSfs numbers, usually with a high number of controllers or nodes behind the benchmarks. More often than not, readers probably focus on the NFSops/sec figures rather than on the number of heads required to generate them.

Even before I was aware of this HDS announcement, I was already asking myself that question about NFSops/sec per SINGLE controller head. So, on September 26, 2013, I put together a table comparing some key participants in the SPECSfs2008_nfs.v3 benchmark, and here it is:

SPECSfs2008_nfs.v3 comparison table (26 September 2013)

In the last columns of the 2 halves (which I have highlighted in red), the NFSops/sec per single controller head numbers are shown. I hope that readers will view the performance numbers more objectively after reading this. I will let you make your own decisions, but ultimately the numbers are what they are. One should not be over-mesmerized by the million-plus NFSops/sec figures without looking under the hood. Secondly, one should also look at things more holistically, using metrics such as $/NFSops/sec, $/ORT (overall response time), $ per GB managed, and other relevant indicators of the systems sold.
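For anyone who wants to reproduce that last column, the arithmetic is trivial. Here is a minimal sketch of the normalization; the systems and figures below are placeholders I made up for illustration, not the actual entries in my table.

```python
# Normalize a SPECSfs2008_nfs.v3 result by the number of controller heads
# behind it. The entries below are illustrative placeholders, not the
# actual figures from my 26-Sept-2013 comparison table.

results = [
    # (system, NFSops/sec, number of controller heads/nodes)
    ("Vendor A scale-out cluster", 1_000_000, 140),
    ("Vendor B dual-controller",     200_000,   2),
    ("Vendor C 4-node cluster",      600_000,   4),
]

for system, ops, heads in results:
    ops_per_head = ops / heads
    print(f"{system:30s} {ops:>9,d} NFSops/sec over {heads:>3d} heads "
          f"= {ops_per_head:>9,.0f} NFSops/sec per head")
```

Divide a monster aggregate number by a large node count and the per-head figure can look a lot less impressive.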

But I do not want to steal the thunder from HDS' HNAS platforms in this recent benchmark. In summary:

HDS SPECbench summary

Reaching a respectable number of 607,647 NFSops/sec with a sub-millisecond overall response time is quite incredible. The ORT of 0.59 msec should not be taken lightly, because eking out even 0.1 msec is not easy. Getting down to roughly half a millisecond is pretty awesome.

This is my first blog post after 3 months. I am glad to be back, and hopefully with the monkey off my back (I am referring to my outstanding engagements), I can concentrate on writing good stuff again. I know, I know … I still owe some people some entries. It’s great to be back 🙂

Correcting an incorrect portrayal of NCQ with SSDs

A kind reader, Baruch Even, has pointed out my ignorance about how SATA Native Command Queuing (NCQ) works with Solid State Drives (SSDs) in my previous blog post.

In that post, I haphazardly stated that NCQ was meant for spinning mechanical drives. I was wrong.

NCQ does indeed improve the performance of SSDs using SATA interfaces, sometimes by as much as 15-20%. I know there is a statement on the SATA Wikipedia page saying that NCQ boosts IOPS by 100%, but I would take a much more realistic view of things rather than set expectations too high.

A typical SSD consists of flash storage spread across multiple chips, or flash packages. Within each flash package there are multiple dies (as in the semiconductor manufacturing term “die”, nothing to do with death), each of which houses planes (nothing to do with aeroplanes), which are further divided into blocks and pages.
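That internal parallelism is exactly what NCQ exploits: with up to 32 commands outstanding, the drive can keep several dies busy at once instead of idling between requests. Here is a toy model of the idea, purely my own illustration and not any vendor's firmware logic; the latency and die count are assumptions.

```python
# Toy model: how queue depth (NCQ allows up to 32 outstanding commands on
# SATA) interacts with an SSD's internal parallelism. This is purely an
# illustration of the concept, not a model of any real drive's firmware.

def effective_iops(read_latency_us: float, num_dies: int, queue_depth: int) -> float:
    """Commands can only be serviced in parallel if the host keeps enough
    of them outstanding; parallelism is capped by the internal die count."""
    in_flight = min(queue_depth, num_dies)
    return in_flight * (1_000_000 / read_latency_us)

LATENCY_US = 100.0   # assumed flash read latency per command
DIES = 8             # assumed number of independent dies

for qd in (1, 4, 8, 32):
    print(f"queue depth {qd:>2}: ~{effective_iops(LATENCY_US, DIES, qd):>8,.0f} IOPS")
```

Real drives already pipeline and buffer aggressively, which is why measured gains tend to be closer to the 15-20% mentioned above than to what this idealized model would suggest.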

Continue reading

Novell Filr Technology Overview Part 1

Today I feel like a kid opening presents on Christmas morning.

Reading about and understanding the Novell Filr architecture is exciting, with each feature revealing something different; not everything is entirely unique, but things are done in a simplified way. Novell Filr has simplified a few things that storage guys like me really appreciate. Let me share this technology learning session with you.

2 Key Features

First of all, I see the Novell Filr as a Secure Access Broker.

Novell Filr provides file access, file sharing and file synchronization across multiple mobile devices. The mobility revolution of smartphones, tablets and other “connected” devices in our personal lives is changing our habits and the way we want information to be accessed, which I can summarize in 2 words: SIMPLE, UNINHIBITED. It is that lack of inhibition that scares the hell out of IT, because IT is losing control and corporations fear data leaks.

Novell Filr lets users access their home directories and network folders from their mobile devices. It lets users synchronize their files with Windows and MacOS computers, regardless of whether these devices are inside the company’s firewalled network or outside of it. Here’s a simple diagram of how Novell Filr defines its position as a Secure Access Broker.

Continue reading

Novell Filr about to be revealed

My training engagement landed me in Manila this week. At the back of my mind is Novell Filr, first revealed to me a week ago by my buddy at Novell Malaysia. Almost 18 months after I first wrote about it, Novell Filr is about to be revealed in my blog within this month. And it has come at an opportune time, because the enterprise BYOD/file synchronization market is about to take off.

Gartner defines this market as Enterprise File Synchronization and Sharing (EFSS) and it is already a very crowded market given the popularity of Dropbox, Box.net, Sugarsync and many, many others. It is definitely a market that is coveted by many but mastered by a few. There are just too many pretenders and too few real players.

The proliferation of smartphones, tablets and other mobile devices has opened up a burgeoning need to have data everywhere. The convenience of having data right at our fingertips whenever we want it gives rise to the desire to have business and corporate data available as well. The power of having data instantly at the swipe of a finger on a touchscreen makes us feel almost godlike, giving life to our communications and making opportunities come alive at that very moment. Continue reading

Washing too much software defined

There was practically a firestorm when EMC announced ViPR, its own version of “software-defined storage”, at EMC World last week. Whether you want to call it Virtualization Platform Re-defined or Re-imagined, competitors such as NetApp, HDS and Nexenta have taken pot-shots at EMC while touting their own versions of software-defined storage.

In the release announcement, EMC claimed the following (a cut-&-paste from the announcement):

  • The EMC ViPR Software-Defined Storage Platform uniquely provides the ability to both manage storage infrastructure (Control Plane) and the data residing within that infrastructure (Data Plane).
  • The EMC ViPR Controller leverages existing storage infrastructures for traditional workloads, but provisions new ViPR Object Data Services (with access via Amazon S3 or HDFS APIs) for next-generation workloads. ViPR Object Data Services integrate with OpenStack via Swift and can be run against enterprise or commodity storage.
  • EMC ViPR integrates tightly with VMware’s Software Defined Data Center through industry standard APIs and interoperates with Microsoft and OpenStack.
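The Object Data Services part of the announcement is the bit I find most interesting. I have not touched ViPR myself, but if the object interface is genuinely S3-compatible, then client access should look roughly like the sketch below; the endpoint URL, credentials and bucket name are placeholders of my own, not actual ViPR values.

```python
# Hedged sketch: talking to an S3-compatible object endpoint with boto3.
# The endpoint, credentials and bucket below are placeholders, not real
# ViPR Object Data Services values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object.example.internal:9021",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

# Create a bucket and write/read an object, exactly as one would against S3.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object world")
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read().decode())
```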

The separation of the Control Plane and the Data Plane in ViPR allows the abstraction of 2 main layers.

Layer 1 is the abstraction of the underlying storage hardware infrastructure. Although I don’t have the full details (EMC guys, please enlighten me!), I believe storage administrators no longer need to carve out LUNs from RAID groups or storage pools, stripe and slice them, and further provision them into meta file systems before they are exported or shared through NAS protocols. I am, of course, referring to the underlying provisioning architecture of Celerra, which can be quite complex. Anyone who has done manual provisioning with Celerra Manager should know what I mean.

Here’s the provisioning architecture of Celerra:

Continue reading

The big boys better be flash friendly

An interesting article came up in the news this week. The article, from the ever-popular The Register, mentioned 3 up-and-coming storage stars, Nimble Storage, Tintri and Tegile, and their assault on a flash strategy “blind spot” of the big boys, notably EMC and NetApp.

I have known about Nimble Storage and Tintri for a couple of years now, and I did take some time to read up on their storage technology offerings. Tegile was new to me; it appeared on my radar after SearchStorage.com announced it as the Gold Winner of the enterprise storage category for 2012.

The Register article intrigued me because it implied that traditional storage vendors such as EMC and NetApp are probably applying a “band-aid” when putting together their flash storage strategies. Typically, I see these strategic concepts introduced by these 2 vendors:

  1. Have a server-side cache strategy by putting a PCIe card on the hosting server
  2. Have a network-based all-flash caching area
  3. Have a PCIe-based flash card on the storage system
  4. Have solid state drives (SSDs) in its disk shelves enclosures

In (1), EMC has VFCache (the server-side caching software has since been renamed XtremSW Cache and is being repackaged under the Xtrem brand name) and NetApp has its FlashAccel solution. Previously, as I was informed, FlashAccel used the Fusion-io ioTurbine solution, but just days ago NetApp added the LSI Nytro WarpDrive to its FlashAccel solution as well. The main objective of a server-side flash caching strategy is to accelerate mostly read-based I/O operations for specific application workloads at the server side.
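Conceptually, the server-side caching in (1) is a read-through cache sitting in front of the array: repeat reads are served from local flash, and misses fall through to the storage system. Here is a bare-bones sketch of that concept; it is my own illustration, not how VFCache/XtremSW Cache or FlashAccel is actually implemented.

```python
# Generic read-through cache sketch: serve repeat reads from fast local
# flash, fall back to the backend array on a miss. This illustrates the
# concept only; it is not how any vendor's caching software works internally.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backend_read, capacity_blocks: int):
        self.backend_read = backend_read          # function: block_id -> bytes
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()               # LRU ordering: oldest first

    def read(self, block_id: int) -> bytes:
        if block_id in self.blocks:               # cache hit: served from local flash
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = self.backend_read(block_id)        # cache miss: go to the array
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:      # evict the least recently used block
            self.blocks.popitem(last=False)
        return data

# Usage: wrap an "array read" function with the cache.
cache = ReadCache(backend_read=lambda blk: f"block-{blk}".encode(), capacity_blocks=1024)
cache.read(42)   # miss: fetched from the array
cache.read(42)   # hit: served from the server-side cache
```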

Continue reading

Storage Facebook likes

There is a mini revolution going on, and Facebook is the main force driving it.

It is the Open Compute Project (OCP), and its mission is to redesign the modern-day data center and drive open hardware and architectural designs and specifications, including storage. The overall goals are to drive greater data center efficiency, flexibility, energy savings and cost effectiveness in a new class of “hyperscale” datacenters. Facebook, Google and Amazon are some examples of hyperscale datacenter operators, whose businesses rely on massive computing power, exponential storage performance and racks and racks of computing infrastructure to drive their web-computing or cloud-computing services.

Some of the cool technology innovations in mind include systems that support CPUs from any vendor, including Intel and AMD. We may even see both processor brands running on the same motherboard. The Open Common Slots component for processors is based on PCIe. Intel has pledged its Decathlete motherboard specification to OCP, and likewise AMD has contributed its Roadrunner motherboard series specification to the project. ARM processors could also be supported in the near future under this “mix-and-match” OCP ideal.

Other proposed changes include OpenRack specifications, “sleds”, and of course, the Open Vault project for storage (aka “Knox”). Continue reading

AoE – All about Ethernet!

This is long overdue.

A reader of my blog asked if I could do a piece on Coraid. Coraid who?

This is probably a name not many people in Malaysia have heard of. Even most of the storage guys I talk to have never heard of it.

I have known about Coraid for a few years now (thanks to my incessant reading habit), though only from a nonchalant point of view. But when the reader asked about Coraid, I reached out to Kevin Brown, CEO of Coraid, whom I had somehow become connected with on LinkedIn. Kevin was very responsive and got one of their directors, Kaushik Shirhatti, to contact me. Kaushik was very passionate about sharing Coraid’s technology with me. Thanks Kevin and Kaushik!

That was months ago but the thought of writing this blog post has been lingering. I had to scratch the itch. 😉

So, what’s up with Coraid? I can tell that they are different, but it seems to me that their entire storage architecture is so simple that it takes a bit of time even for storage guys to wrap their heads around it. Why do I say that?

Storage guys (like me) are used to layers. One of the memorable movie quotes I recall is from Shrek: “Ogres are like onions! Onions have layers!”
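Part of Coraid's simplicity is that AoE (ATA over Ethernet) rides directly inside Ethernet frames, with no TCP/IP stack in between, and targets are addressed by shelf and slot rather than by IP address. The sketch below packs an AoE header the way I read the published AoE specification (AoEr11); treat the layout as my interpretation, and the shelf/slot values as arbitrary examples.

```python
# Sketch of an AoE (ATA over Ethernet) header, as I read the AoE
# specification (AoEr11). AoE frames sit directly inside an Ethernet
# frame with EtherType 0x88A2; there is no IP and no TCP. The shelf/slot
# values here are arbitrary examples.
import struct

AOE_ETHERTYPE = 0x88A2

def aoe_header(shelf: int, slot: int, command: int, tag: int,
               version: int = 1, flags: int = 0, error: int = 0) -> bytes:
    """Pack the 10-byte AoE header: ver/flags, error, major (shelf),
    minor (slot), command, tag."""
    ver_flags = (version << 4) | (flags & 0x0F)
    return struct.pack("!BBHBBI", ver_flags, error, shelf, slot, command, tag)

# Example: a "Query Config Information" request (command 1) to shelf 7, slot 0.
header = aoe_header(shelf=7, slot=0, command=1, tag=0x1234)
print(header.hex())
```

Addressing a disk by shelf and slot on a flat Ethernet segment is what strips most of the usual layers out of the picture.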

Continue reading

HUS VM is not a virtual storage appliance

I was very confused by a recent HDS announcement, and it has been at the back of my mind for several weeks now.

In the last week of September 2012, HDS announced its Hitachi Unified Storage VM, aimed at small and medium enterprises (SMEs). Nothing wrong with that, except the VM part. I am not sure if it was the Computerworld author’s mistake, but he specifically referred to the VM as “virtual machine”. Check out the link here and the screenshot below:

It got me a bit riled up, thinking this was some kind of virtual storage appliance à la the VMware Virtual Storage Appliance, NetApp ONTAP-v, or even that early innovation, the HP LeftHand Virtual SAN Appliance. Apparently not!

I did a short investigation and found Nigel Poulton’s blog, which gave a fantastic dissection of the HUS VM. The VM is not “virtual machine” but “Virtual Midrange”!

The HUS VM architecture is deep in ASICs, given HDS’ long history in ASIC design and manufacturing. SiliconFS is the NAS front end, while the iSCSI and FC parts are serviced by the same HDS microcode as the higher-end HDS VSP. Here’s a look at the hardware architecture diagram from Nigel’s blog:

There are plenty of bells and whistles in the HUS VM: plenty of 8Gbps FC ports, a 6Gbps SAS backend, SSDs, and software such as Dynamic Provisioning (thin provisioning) and Dynamic Tiering.

Continue reading

Swiss army of data management

Back in 2000, before I joined NetApp, I bought one of my first storage technology books. It was “The Holy Grail of Data Storage Management” by Jon William Toigo. The book served me very well, because it opened my eyes to the storage networking and data management world.

I mean, I had been doing storage for 7 years before the year 2000, but I was an implementation and support engineer. I installed my first storage arrays in 1993: the trusty but sometimes odd SPARCstorage Array 1000. These “antiques” ran 0.25Gbps Fibre Channel, and that nationwide bank project gave me my first taste of and insight into SAN. Point-to-point, but SAN nonetheless.

Then at Sun, from 1997 to 2000, I implemented the old storage disk packs with Fast/Wide SCSI, moved on to the A5000 Photons (remember these guys?), and was trained on the A7000, which came from Sun’s acquisition of Encore way back in the late nineties. Then there was “Purple”, the T300, which I believe came from the acquisition of MaxStrat.

The implementation and support experience was good, but my world opened up when I joined NetApp in mid-2000. And from Jon Toigo’s book, I learned one of the most important lessons that I have carried with me to this day: “Data storage management is 3x more expensive than the data storage equipment itself”. Given the complexity of data today compared to the early 2000s, I would say it is likely to be 4-5x more expensive.

And yet, I am still perplexed that many customers and prospects still cannot see the importance and the gravity of data storage management, and more precisely, data management itself.

A couple of months ago, I had the opportunity to work on an RFP for a project in Singapore. The customer had thousands of tapes storing digital media files in addition to tens of TBs running on IBM N-series storage (which translates to a NetApp FAS3xxx). They wanted to revamp their architecture, and invited several vendors in Singapore to propose. I was working for a friend, who is an EMC reseller. But when I saw that tapes figured heavily in their environment, and that the other resellers were proposing EMC Isilon and NetApp C-Mode, I thought these resellers were just trying to stuff a square peg into a round hole. They had not addressed the customer’s issues and problems at all, and were merely proposing storage for the sake of storage capacity. Sure, EMC Isilon is great for the media and entertainment business, but EMC Isilon is not the data management solution for this customer’s situation. Neither was NetApp with the C-Mode solution.

What the customer needed was a data management solution, one that involved:

  • A single namespace for video editors and programmers, regardless of whether the data sits on online disk storage or archived tape storage
  • Transparent and automated storage tiering that matches the value of the data to the storage media (a simple sketch of what I mean follows this list)
  • A backup tier which kept a minimum of 2 recent copies for file restoration in case of disasters
  • An archived tier which they could share with their counterparts in other regions
  • A transparent replication tier which would allow them to implement a simplified disaster recovery mechanism with their counterparts in Japan and China
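To make the tiering requirement concrete, here is a deliberately simple, hypothetical policy sketch of my own; it is not StorNext’s (or any vendor’s) actual policy engine, and the 90-day threshold and path are just assumptions.

```python
# Hypothetical age-based tiering policy sketch: my own illustration of the
# requirement, not StorNext's (or any vendor's) actual policy engine.
import os
import time

DISK_TIER_MAX_AGE_DAYS = 90      # assumed policy: untouched for 90 days -> archive
SECONDS_PER_DAY = 86_400

def pick_files_to_archive(disk_tier_path: str):
    """Walk the disk tier and pick files whose last access time is older
    than the policy threshold; these become candidates for the tape tier."""
    cutoff = time.time() - DISK_TIER_MAX_AGE_DAYS * SECONDS_PER_DAY
    candidates = []
    for root, _dirs, files in os.walk(disk_tier_path):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < cutoff:
                candidates.append(path)
    return candidates

# In a real HSM, each archived file would leave a stub behind so the single
# namespace stays intact and a later read transparently recalls it from tape.
for path in pick_files_to_archive("/media/disk-tier"):
    print("candidate for tape tier:", path)
```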

And these were the key issues that needed to be addressed, not scale-out or the usual snapshot mechanisms. Those features are good for a primary, production storage infrastructure, but about 70-80% of this customer’s data and files were offline on tapes. I took the liberty of advising my friend to look into Quantum StorNext, because that solution could solve the business problem, NOT just solve it from an IT point of view. Continue reading