Praying to the hypervisor God

I was reading a great article by Frank Denneman about storage intelligence moving up the stack. It was pretty much in line with what I have been observing in the past 18 months or so: the storage pendulum has swung back to DAS (direct-attached storage). To be more precise, the DAS form factor I am referring to is physical server hardware that houses many disk drives.

Like it or not, the hypervisor has become the center of the universe in the IT space. VMware has become the indomitable force in hypervisor technology, with Microsoft Hyper-V playing catch-up. The seismic shift caused by these two hypervisor technologies is leading storage vendors to place them on the altar and revere them as deities. The others, the likes of Xen and KVM, and to a lesser extent Solaris Containers, aren’t really worth mentioning.

This shift, as the pendulum swings from networked storage back to internal “direct-attached” storage, is dictated by four main technology factors:

  • The x86 server architecture
  • Software-defined
  • Scale-out architecture
  • Flash-based storage technology

Anyone remember Thumper? Not the Disney character from the Bambi movie!


When the SunFire X4500 (aka Thumper) was first released in (intermission: checking Wiki for the right year) 2006, I felt a significant wound had been inflicted on the networked storage industry. Instead of the usual 4-8 hard disk drives in the industry’s servers at the time, the X4500’s 4U chassis housed 48 hard disk drives. The design and architecture were so astounding to me that I even went and bought a 1U SunFire X4150 for my personal server collection. Such was my adoration for Sun’s technology at the time.

The x86 server architecture of Thumper might have been the blueprint for these types of server designs in the present, because most of the servers that we encounter today are quite similar in concept to the original Thumper. Server-SAN (to be different from the original DAS nomenclature) became a possibility. Supermicro and Quanta are some of the big ODMs (original design manufacturers) serving this market today.

The next point is software-defined. The ability of software to abstract and virtualize the physical resources of the underlying hardware is nothing new. But in the storage industry, that “software-defined” openness was never really there, because there was a proprietary nature to each enterprise storage architecture that made each vendor unique. For example, NetApp’s ONTAP has to work with its NVRAM, and the old EMC CLARiiON and CX arrays were bound to their proprietary LCCs (Link Control Cards). These hardware components are proprietary to the respective storage vendors, and are also an integral part of the architecture of both storage operating environments.

At the time, circa 2006-2008, I was searching for a mature storage operating system to run on the x86 architecture. I was still working for EMC in late 2008, when rumours were rife that I would be retrenched. I was the technology consultant for EMC IP Storage solutions, and at the same time, being the busybody that I was, I was also the technology consultant for the Oil & Gas industry sector (with the endorsement of EMC Asia South senior management) – much to the dislike of my boss at EMC. 😉

After I was retrenched in January of 2009, I found a company that had Sun ZFS (Zettabyte File System) roots, and I became their reseller. I think I was the first Nexenta reseller in Malaysia, because their VP of Sales, Jon Ash, said I was. For about a year, I could not make much headway with Nexenta, because their capacity-tier pricing was not that attractive to Malaysian customers. While Nexenta was a bump in my technology portfolio, OpenIndiana was the saviour. My new partner and I took OpenIndiana and started assembling a decent enterprise storage array with Supermicro server solutions. That took off well, and we did our first sale of 64TB! That deal was less than MYR90K, and immediately I felt I had software-defined storage to thank for it.

The software-defined, ZFS-based storage operating system of OpenIndiana blended extremely well with the Supermicro x86 server and its internal SAS architecture. Coupled with our DTrace analytics scripts and a bit of our own IP, it let us build an enterprise storage array. We had built a freaking enterprise storage array!
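For flavour, here is a minimal sketch of how such a ZFS array might be assembled on an x86 box. The device names (c1t0d0 and so on) and settings are hypothetical; this is illustrative, not our actual build recipe:

```shell
# Create a double-parity (raidz2) pool across six SAS drives,
# with an SSD added as a read cache (L2ARC).
# Device names are hypothetical; adjust to your system.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool add tank cache c2t0d0

# Carve out a filesystem with compression and share it over NFS.
zfs create tank/vmstore
zfs set compression=on tank/vmstore
zfs set sharenfs=on tank/vmstore

# Verify the pool layout and health.
zpool status tank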

As software-defined storage and file system technologies matured, scale-out became pertinent. Scale-out software architecture allowed the x86 server-storage infrastructure to become a single resource through clustering, with the ability to pool compute, network and storage resources to serve applications in virtual machines. This further cements the hypervisor’s God-like deity status. VMware started dishing out APIs – VAAI, VASA, VADP – and other new technologies such as VSAN and vVOLs (coming soon), and all storage vendors had to bow and pray to VMware. It is either “be there or be square“! Hyper-V fans could be wishfully thinking this could be their moment too.

But the final piece that finally topples the ivory tower of enterprise networked storage is really flash-based solid state storage. The price of SSDs has declined significantly, to a point where it is very close to the price of mechanical spinning HDDs. The chart, taken from one of the SanDisk presentations, shows the price decline of both SSDs and HDDs.
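To see why a steeper decline rate matters more than the absolute price gap, here is a back-of-the-envelope sketch. Every number in it is an assumption for illustration only, not a figure from the SanDisk chart:

```python
# Illustrative $/GB model: SSD prices fall faster than HDD prices,
# so the gap closes even from a much higher starting point.
# All starting prices and decline rates below are assumptions.
def price_per_gb(start, annual_decline, years):
    """Price per GB after compounding an annual fractional decline."""
    return start * (1 - annual_decline) ** years

ssd_start, ssd_decline = 1.00, 0.30   # assumed $/GB, ~30% cheaper per year
hdd_start, hdd_decline = 0.05, 0.10   # assumed $/GB, ~10% cheaper per year

for year in range(0, 15):
    ssd = price_per_gb(ssd_start, ssd_decline, year)
    hdd = price_per_gb(hdd_start, hdd_decline, year)
    if ssd <= hdd * 2:   # "close enough" threshold, also an assumption
        print(f"Year +{year}: SSD ${ssd:.3f}/GB vs HDD ${hdd:.3f}/GB")
        break
```

With these made-up rates, the SSD reaches within 2x of the HDD price in roughly a decade; the point is the compounding, not the specific numbers.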

[Chart: SanDisk SSD vs HDD price-per-GB trends]

Solid State Storage devices (including 3.5″/2.5″ SSDs, M.2 formats, PCIe-based cards, NVDIMMs) have obliterated the notion that hypervisors will suffer I/O performance bottlenecks with traditional server-side storage, even with scale-out and software-defined, a notion that had threatened this burgeoning new technology market.

Fortunately, we can now see that the other three components – x86 server architecture, software-defined, scale-out – are also innovating to take advantage of Solid State Storage, coming out with newer specifications and protocols such as M.2 formats, NVMe and SCSIe, several of which have not yet received full industry endorsement or support.

This prompted Gartner to come up with the Magic Quadrant for integrated systems or hyper-converged systems. Below is the latest Gartner Magic Quadrant released a couple of months ago.

[Figure: Gartner Magic Quadrant for Integrated Systems]

And every one of them in the chart is playing the same game, praying to the same deity – the Hypervisor.

About cfheoh

I am a technology blogger with 20+ years of IT experience. I write heavily on technologies related to storage networking and data management because that is my area of interest and expertise. I introduce technologies with the objective of getting readers to *know the facts*, and use that knowledge to cut through the marketing hype, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then will there be progress. I am involved in SNIA (Storage Networking Industry Association) and as of October 2013, I have been appointed as the SNIA South Asia & SNIA Malaysia non-voting representative to the SNIA Technical Council. I was previously the Chairman of SNIA Malaysia until Dec 2012. As of August 2015, I am returning to NetApp to be the Country Manager of Malaysia & Brunei. Given my present position, I am not obligated to write about my employer and its technology, but I am indeed subject to the Social Media Guidelines of the company. Therefore, I would like to make a disclaimer that what I write is my personal opinion, and mine alone. I am responsible for what I say and write, and this statement indemnifies my employer from any damages.

One Response to Praying to the hypervisor God

  1. scroogie says:

    Very interesting article! I so love the old Suns. The 4500, running ZFS on Solaris, was so crazy great at the time imho, from Hardware to Software. The features of ZFS, the NFS speed, all the little details of the chassis, the ILOM/IPMI, etc. Of course it had its own share of problems, but I think I’m still in love with it. 🙂
