[Preamble: I was invited by GestaltIT as a delegate to their Tech Field Day from Oct 17-19, 2018 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog reflects my own opinions and views]
There is an argument about NetApp's HCI (hyperconverged infrastructure). According to one school of thought, it is not really a hyperconverged product at all. Maybe NetApp is just riding on the hyperconvergence marketing coattails, wanting to be associated with the HCI hot streak. At the other end of the same spectrum, Datrium decided to call their technology open convergence, clearly trying not to be related to hyperconvergence.
Hyperconvergence has been enjoying a period of renaissance for a few years now. Leaders like Nutanix, VMware vSAN, Cisco HyperFlex and HPE SimpliVity have been dominating the scene, touting great IT benefits and the elimination of IT inefficiencies. But in these technologies, performance and capacity are tightly intertwined. In each of the individual hyperconverged nodes, typically starting with a trio of nodes, the processing power and the storage capacity come together. You have to accept both resources as a node. If you want more processing power, you get the additional storage capacity that comes with that node. If you want more storage capacity, you get more processing power whether you like it or not. Over time, this means underutilized resources that are definitely not rightsized for the job.
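To make the coupled-scaling problem concrete, here is a minimal arithmetic sketch. The node specifications are invented for illustration, not any vendor's actual SKU; the point is simply that a storage-heavy workload forces you to buy compute you do not need.

```python
# Hypothetical HCI node specs -- invented numbers, for illustration only.
NODE_CORES = 24   # CPU cores per node
NODE_TB = 20      # raw storage per node, in TB

def nodes_needed(cores_required, tb_required):
    """Nodes needed when compute and storage can only scale together."""
    by_cores = -(-cores_required // NODE_CORES)  # ceiling division
    by_tb = -(-tb_required // NODE_TB)
    return max(by_cores, by_tb)

# A storage-heavy workload: modest compute, lots of capacity.
cores_required, tb_required = 48, 200
n = nodes_needed(cores_required, tb_required)
stranded_cores = n * NODE_CORES - cores_required

print(n, stranded_cores)  # -> 10 192: ten nodes bought, 192 cores sit idle
```

Disaggregating compute from storage lets the planner buy ten storage nodes and only two compute nodes for the same workload, which is precisely the rightsizing argument made above.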
And here in Malaysia, we have seen vendors throwing hyperconverged infrastructure solutions at every single requirement. That was why I wrote a piece about some zealots of hyperconverged solutions 3+ years ago. When you think you have a magical hammer, every problem is a nail. 😉
On my radar, NetApp and Datrium are the only two vendors that offer separate nodes for compute processing and storage capacity and still fall within the hyperconverged space. This approach obviously benefits the IT planners and the IT architects, and the customers too, because they get what they want for their business. However, the disaggregation of compute processing and storage leads to the argument of whether these two companies belong in the hyperconverged infrastructure category at all.
Today, we see other vendors taking that disaggregation mindset and design to the next level. Earlier this year, Intel released their Rack Scale Design architecture, and just a few months ago, Western Digital announced their OpenFlex architecture. Sandwiched between the two announcements was the OpenStack Cyborg project. More about OpenStack Cyborg here.
It is clear that the compute, network and storage resources should be composed to the specifications and requirements of the workload. Of course, I do not want to leave out an early innovator of this design concept, HPE Synergy. This software composable infrastructure, or SCI, is gaining strong momentum and is likely to be the future of data centers, where true data processing and data services will prevail.
On the resource management front, a couple of fishes have emerged. The Distributed Management Task Force (DMTF) Redfish framework and API are positioned to manage compute resources, while the SNIA Swordfish framework and API cover storage and information infrastructure resources. The diagram below was published by Western Digital to depict both frameworks, along with OpenFlex's own API, Kingfish.
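Both Redfish and Swordfish are RESTful, hypermedia-driven APIs: a client starts at the service root and follows `@odata.id` links to discover resources instead of hard-coding URLs. The sketch below parses a hand-written, trimmed example of a service-root response; the payload is invented for illustration, but the resource paths (`/redfish/v1/Systems` for compute, `/redfish/v1/Storage` for storage) follow the structure of the published DMTF Redfish and SNIA Swordfish specifications.

```python
import json

# A trimmed, hand-written example of a Redfish service-root response.
# Real service roots expose more collections (Chassis, Managers, ...);
# this subset is enough to show the link-following discovery pattern.
service_root = json.loads("""
{
  "@odata.id": "/redfish/v1/",
  "Systems": { "@odata.id": "/redfish/v1/Systems" },
  "Storage": { "@odata.id": "/redfish/v1/Storage" }
}
""")

# A management client discovers resources by following these links
# rather than assuming fixed URLs.
systems_url = service_root["Systems"]["@odata.id"]
storage_url = service_root["Storage"]["@odata.id"]
print(systems_url, storage_url)
```

In a live deployment the client would issue HTTP GETs against these paths on the management endpoint; the discovery pattern is what lets one framework manage compute while its sibling extends the same model to storage.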
There is still much for me to learn, but I got a good understanding of the concept from DriveScale, one of the early proponents of software composable infrastructure. As a Tech Field Day 17 delegate, I was given a full view of what the DriveScale technology can deliver, and a deep dive into their technology.
In the DriveScale session at Tech Field Day, we learned how the data deluge is inundating data centers. Present-day infrastructure cannot cope with these massive workloads, and this includes hyperconverged infrastructure platforms with their tightly coupled compute, network and storage resources. Tom Lyon, Chief Scientist at DriveScale, in his Arc of Intelligence session, spoke about "liberating the device". That is precisely what disaggregation is about: liberating the "device", then composing and assembling the compute, the network and the storage resources into virtual clusters, each matched to the burgeoning workloads on a massive scale.
I have shared the video of Tom presenting his vision below:
DriveScale The Arc of Intelligence from Stephen Foskett on Vimeo.
I have much to learn about software composable infrastructure technology and the different architectural frameworks and APIs espoused by many, including DriveScale, Western Digital, Intel, OpenStack, NetApp, Datrium, Kaminario and more. But one thing is clear.
The future of the data center is about disaggregation and composition using software composable infrastructure (SCI). Hyperconvergence may not be the right platform to handle data at massive scale.