A conceptual distributed enterprise HCI with open source software

Cloud computing has changed everything, at least at the infrastructure level. Kubernetes is changing everything as well, at the application level. Enterprises are attracted by the tenets of cloud computing, and cloud adoption has escalated as a result. But it does not have to be a zero-sum game. Hybrid computing can give enterprises a balanced choice, letting them take advantage of the best of both worlds.

Open Source has changed everything too, because organizations now have a choice: they can balance their costs and expenditures while still running top enterprise-grade software. The challenge is how organizations put these pieces together using open source software. Integration of open source infrastructure software and applications can be complex and costly.

The next version of HCI

Hyperconverged Infrastructure (HCI) also changed the game. Integration of compute, network and storage became easier, more seamless and less costly when HCI entered the market. Wrapped with a single control plane, the HCI management component can orchestrate VM (virtual machine) resources without much friction. That was HCI 1.0.

But HCI 1.0 was challenged, because several key components of its architecture were based on DAS (direct-attached storage). Scaling storage capacity was limited by the storage components attached to the HCI architecture. Some storage vendors decided to get creative and created dHCI (disaggregated HCI). If you break down the components one by one, in my opinion, dHCI is just a SAN (storage area network) bolted onto HCI. Maybe this should be called HCI 1.5.

A new version of an HCI architecture is swimming in as Angelfish

Kubernetes came into the HCI picture in recent years. Without the weight and dependencies of VMs and DAS at the HCI server layer, lightweight containers, orchestrated mostly by Kubernetes, made distribution of compute easier. From on-premises to cloud and everywhere in between, compute resources can easily be spun up or down anywhere.
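As a small illustration of that elasticity (my own sketch, not from the original post), the official Kubernetes Python client can scale a workload up or down with a single patch call. The Deployment name, namespace and replica counts below are placeholders for the example.

```python
# Minimal sketch: scaling a hypothetical Deployment up and down with the
# official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the replica count of an existing Deployment."""
    config.load_kube_config()  # uses the local kubeconfig / current context
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )
    print(f"Scaled {namespace}/{name} to {replicas} replicas")


if __name__ == "__main__":
    # 'web' and 'default' are placeholders; substitute your own workload.
    scale_deployment("web", "default", replicas=5)  # spin up
    scale_deployment("web", "default", replicas=1)  # spin back down
```

The same call works against an on-premises cluster or a managed cloud cluster; only the kubeconfig context changes, which is the point about compute going anywhere.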

Data has gravity, and data is heavily anchored to the repositories where it is stored. Instead of building HCI 1.0 from the compute viewpoint, where storage is difficult to scale, several storage vendors (obviously) are using the compute processing power at the storage layer to run VMs with a built-in hypervisor. A case in point is Dell® PowerStore AppsON, with VMware® ESXi as the hypervisor and vCenter as the resource orchestrator.

Open Source has taken this HCI architecture another step further. Instead of VMs (which are heavyweights), containers and Kubernetes are the replacement components in a new, burgeoning HCI architecture. Even better, Kubernetes pods can be deployed as a more complete and more cohesive application ecosystem. iXsystems™ TrueNAS® SCALE has been in the works for over a year, garnering massive interest (and testing) in the open source community, and inching closer and closer to a General Availability date in Q1 2022.

This is the next version of the HCI architecture. Applications and workloads running on containers, orchestrated by Kubernetes, scaling to multiple petabytes of storage, without the limitations of HCI 1.0. Unlike proprietary HCI platforms, open source infrastructure is changing the HCI landscape again.

Edge to Core to Cloud Architecture (An Open Source concept)

[ Note: This is a concept. Yet to be tested but entirely plausible ]

Up to this point, everything I wrote is real. Up to this point, I have also highlighted the word “Integration” several times.

Bringing all these together in the most frictionless way is not easy. Designing a distributed applications and workloads architecture that spans the different premises of an organization is hard, but the open source software pieces (built with an enterprise mindset) are starting to fall into place. The objective is to make the integration as frictionless as possible.

The pièce de résistance is the TrueNAS® SCALE HCI platform. Open source, it houses a catalog of container-based apps (extensible with community apps via TrueCharts), packaged as Helm Charts. Using the distributed power of MinIO object storage with native Kubernetes integration, and Red Hat® OpenShift integrating seamlessly with Kubernetes orchestration at the applications and workloads layer, storage resources can be called upon easily to serve containers at the compute layer. There is also strong integration between MinIO and OpenShift; the MinIO Operator can be easily installed through the OpenShift OperatorHub. I would suppose you could replace OpenShift with SUSE® Rancher Kubernetes Engine (RKE) and it would work too. The power of choice in Open Source is very much alive.

TrueNAS SCALE provisioning MinIO tenants using Red Hat OpenShift and the MinIO Operator
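To make the operator-driven provisioning a little more concrete, here is a rough sketch of my own (not vendor documentation) that uses the Kubernetes Python client to submit a minimal MinIO Tenant custom resource to a cluster where the MinIO Operator is already installed. The minio.min.io/v2 group and version is what recent versions of the MinIO Operator register; the tenant name, namespace, pool sizing and storage class are all assumptions, and a real Tenant spec carries more fields (credentials, TLS certificates, and so on).

```python
# Rough sketch: asking the MinIO Operator (already installed, e.g. via the
# OpenShift OperatorHub) to provision a small object storage tenant.
from kubernetes import client, config

# Simplified Tenant custom resource; a production spec needs credentials,
# TLS settings and proper sizing. All names here are placeholders.
tenant = {
    "apiVersion": "minio.min.io/v2",
    "kind": "Tenant",
    "metadata": {"name": "edge-tenant", "namespace": "minio-tenants"},
    "spec": {
        "pools": [
            {
                "servers": 4,            # MinIO server pods in this pool
                "volumesPerServer": 4,   # PVCs per pod
                "volumeClaimTemplate": {
                    "spec": {
                        "storageClassName": "truenas-iscsi",  # hypothetical class
                        "accessModes": ["ReadWriteOnce"],
                        "resources": {"requests": {"storage": "500Gi"}},
                    }
                },
            }
        ],
    },
}

config.load_kube_config()
custom_api = client.CustomObjectsApi()
custom_api.create_namespaced_custom_object(
    group="minio.min.io",
    version="v2",
    namespace="minio-tenants",
    plural="tenants",
    body=tenant,
)
print("Tenant submitted; the operator reconciles it into pods and PVCs.")
```

The operator takes over from there, turning the declared tenant into MinIO pods backed by persistent volumes carved from the HCI storage layer.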

The architecture above can be used as the building block for each premises in a distributed organization. The applications and workloads at each premises can be easily replicated and spun up at other premises. The object storage repositories powered by MinIO can be synchronously replicated between premises using active-active bucket-level replication, creating applications and workloads integration continuity from site to site to site.
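As a sketch of how that site-to-site replication might be wired up programmatically, the MinIO Python SDK can attach a replication rule to a bucket. This assumes the peer site has already been registered as a remote replication target with the mc client (the destination ARN below is a placeholder), and the same rule would be applied in the opposite direction on the other site to make it active-active. Endpoint, credentials, bucket and prefix names are all assumptions for the example.

```python
# Sketch: attaching a bucket replication rule with the MinIO Python SDK
# (pip install minio). The remote target must already be registered; the
# ARN, endpoint and credentials below are placeholders.
from minio import Minio
from minio.commonconfig import ENABLED, Filter
from minio.replicationconfig import (DeleteMarkerReplication, Destination,
                                     ReplicationConfig, Rule)

site_a = Minio(
    "minio-site-a.example.internal:9000",
    access_key="ACCESS_KEY",
    secret_key="SECRET_KEY",
    secure=True,
)

replication = ReplicationConfig(
    "REPLACE-WITH-REPLICATION-ROLE",  # placeholder; the destination ARN identifies the peer
    [
        Rule(
            Destination("arn:minio:replication::PLACEHOLDER-TARGET-ID:apps-data"),
            ENABLED,
            delete_marker_replication=DeleteMarkerReplication(ENABLED),
            rule_filter=Filter(prefix="apps/"),  # placeholder prefix
            rule_id="site-a-to-site-b",
            priority=1,
        ),
    ],
)

# Apply the rule on site A; repeat in the reverse direction on site B
# for active-active replication of the 'apps-data' bucket.
site_a.set_bucket_replication("apps-data", replication)
```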

At the same time, older buckets used in this architecture can be tiered to a more economical storage tier such as AWS. MinIO has data lifecycle and tiering management features that can move archived and retired buckets to multiple clouds.
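As an illustration of that lifecycle management, here is a small sketch using the MinIO Python SDK to transition aging objects to a remote tier. The tier name WARM-AWS is hypothetical and would have to be registered on the MinIO deployment beforehand; the endpoint, bucket and prefix are placeholders as well.

```python
# Sketch: moving aging objects to a cheaper remote tier with MinIO's
# lifecycle management (pip install minio). 'WARM-AWS' is a hypothetical
# tier name that must already be configured on the MinIO deployment.
from minio import Minio
from minio.commonconfig import ENABLED, Filter
from minio.lifecycleconfig import LifecycleConfig, Rule, Transition

client = Minio(
    "minio-core.example.internal:9000",  # placeholder endpoint
    access_key="ACCESS_KEY",
    secret_key="SECRET_KEY",
    secure=True,
)

lifecycle = LifecycleConfig(
    [
        Rule(
            ENABLED,
            rule_filter=Filter(prefix="archive/"),  # placeholder prefix
            rule_id="tier-out-after-90-days",
            transition=Transition(days=90, storage_class="WARM-AWS"),
        ),
    ],
)

# Objects under archive/ older than 90 days transition to the remote tier.
client.set_bucket_lifecycle("apps-data", lifecycle)
```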

These open source components I mentioned also have a small footprint, so deploying them on single-node devices should not be too heavy. Designing this single, almost seamless, enterprise-grade, cloud-ready data fabric using open source software does not look like a pipe dream anymore. Edge to Core to Cloud is conceivable using the right open source software.

On-going thoughts

To me, Open Source is about freedom to choose. Pro Choice. More Control.

While integrating open source software on a small scale can be very satisfying, the challenge escalates when that small-scale design and mindset are applied at the enterprise level. It is even harder when the design goes distributed, multi-premises and multi-cloud, while maintaining scalability and stitching together the integration of all components at a global scale. This is where the right open source components, combined with the right architecture design, can take organizations further, leveraging the cloud where it matters while maintaining control and choice.

This conceptual design is open to improvements, but keeping the key open source components at its foundation may make it a platform for bigger things. I hope it can be a starting framework for a turnkey distributed Kubernetes architecture with scalable HCI storage to match. I welcome all comments and teachings from learned practitioners.
