Figuring out storage for Kubernetes and containers

Oops! I forgot about you!

To me, containers and container orchestration (CO) engines such as Kubernetes, Mesos and Docker Swarm are fantastic. They scale effortlessly and are truly designed for cloud native applications (CNA).

But one thing irks me: storage management for containers and CO engines. It is as if, when containers and the CO engines were designed and built, storage and storage management were an afterthought. At least the persistent part of storage.

Over a year ago, I was in two minds about persistent storage, especially given the transient nature of the microservices that were inundating the cloud native applications landscape. I was searching for answers in my blog. The decentralization of microservices in containers means mass deployment at the edge, but having the pre-processed and post-processed data stick to the persistent storage at the edge device is a challenge. The operative word here is “STICK”.

Two different worlds

Containers were initially designed and built for lightweight applications such as microservices. The runtime, libraries, configuration files and dependencies are all in one package. They were meant to do simple tasks quickly and scale to thousands easily. They could be brought up and brought down in little time and did not have to bother about the persistent data stored by the host. The state of the containers was also not important to the application tasks at hand.

Today containers like Docker have matured to run enterprise applications, and the state of the container is important. The applications must know the state and the health of the container. The container could be in online mode, online but not accepting data mode, suspended mode, paused mode, interrupted mode, quiesced mode or halted mode. Each mode or state of the container is important to the running applications, and the container can easily be brought up or down with a single command. The stateful nature of the containers and applications is critical for the business. The same situation applies to container orchestration engines such as Kubernetes.

Container and Kubernetes Storage

Docker provides three methods of local storage, described below (a minimal sketch follows the list):

  1. Bind mount – an absolute or relative path on the local host’s file system. The path is known to local host applications and can be modified by scripts and applications with the right access.
  2. Volumes – usually a new and separate directory created and managed by Docker. The “volume”, which is a directory in the local file system, should not be modified by other applications on the host. There are specific Docker commands to create and delete volumes. This is usually the preferred method of persistent storage for Docker containers.
  3. tmpfs mount – a host-only, in-memory storage location. Data in a tmpfs mount is processed in memory space and never written to disk.
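
Here is a minimal Docker Compose sketch showing all three methods side by side; the image, paths and volume name are hypothetical placeholders, not anything prescribed by Docker:

    # docker-compose.yml -- illustrative sketch only
    services:
      app:
        image: nginx:alpine
        volumes:
          - appdata:/var/lib/appdata       # named volume, created and managed by Docker
          - ./config:/etc/app/config:ro    # bind mount, a path on the host file system
        tmpfs:
          - /run/cache                     # tmpfs mount, in-memory only, never written to disk
    volumes:
      appdata:                             # declares the Docker-managed volume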

Storage in Kubernetes has a larger scope because it has to support many pods and containers in the cluster. Therefore, there are many volume types in Kubernetes. The table below lists all the volume types supported by Kubernetes. All of them are ‘in-tree volume plug-ins‘, which means they are linked, built, compiled and shipped with the core Kubernetes binaries, except for two volume types.
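
As a quick illustration of what “in-tree” means in practice, here is a minimal pod sketch using two common in-tree volume types, emptyDir and hostPath; the names and paths are hypothetical:

    # pod.yaml -- sketch of two common in-tree volume types
    apiVersion: v1
    kind: Pod
    metadata:
      name: intree-demo
    spec:
      containers:
      - name: app
        image: busybox:1.31
        command: ["sleep", "3600"]
        volumeMounts:
        - name: scratch
          mountPath: /scratch
        - name: hostlogs
          mountPath: /host-logs
      volumes:
      - name: scratch
        emptyDir: {}             # ephemeral scratch space tied to the pod's lifetime
      - name: hostlogs
        hostPath:
          path: /var/log         # a directory on the node's own file system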

As of Kubernetes version 1.8 (the current version is 1.14), there is no further development of in-tree volume plug-ins, in favour of 3rd-party volume plug-ins from various storage providers. The two obvious volume types from the table, in red, are the ‘out-of-tree volume plug-ins‘ – FlexVolume (flexvolume) and Container Storage Interface (csi).

Even though FlexVolume and CSI can co-exist, there has been a concerted effort to standardize on CSI as the storage framework for Kubernetes and other CO engines, present and future, because FlexVolume has known limitations.
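
To make the out-of-tree model concrete, here is a hedged sketch of how a cluster consumes storage through CSI: a StorageClass names a vendor’s CSI driver as its provisioner, and a PersistentVolumeClaim requests storage from that class. The driver name ‘csi.example.com’ is a made-up placeholder, not a real vendor driver:

    # csi-storage.yaml -- sketch; 'csi.example.com' is a hypothetical CSI driver
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: vendor-fast
    provisioner: csi.example.com      # the out-of-tree CSI driver does the actual work
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: vendor-fast
      resources:
        requests:
          storage: 10Gi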

Container Storage Interface (CSI)

CSI aims to be the unifying layer that marries the CO engines and the storage providers.

Diving deeper, within the Kubernetes cluster there exist auxiliary containers that run alongside the main containers of a pod. These auxiliary containers are called sidecar containers, and they extend and enhance the functions of the main containers.

As of Kubernetes 1.13, these are the key sidecar containers (below), and more are being added as the project evolves.

The main containers (in the blue box below) “move” the storage communication to the sidecar containers (in the green box below). Communicating over IPC (interprocess communication – listed as Unix Domain Sockets in the documentation), the node-driver-registrar, the external-provisioner and the external-attacher send gRPC (Google Remote Procedure Calls) calls to the external 3rd-party storage component of the respective vendor (in the brown box below).

Source: https://medium.com/google-cloud/understanding-the-container-storage-interface-csi-ddbeb966a3b
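
In deployment terms, the controller-side sidecars typically run in the same pod as the vendor’s driver container, sharing the Unix domain socket through an emptyDir volume, while the node-driver-registrar runs per node in a DaemonSet. A hedged sketch of the controller pod follows; the image tags, names and socket path are illustrative, not any specific vendor’s packaging:

    # csi-controller.yaml -- illustrative fragment of a CSI controller plug-in pod
    apiVersion: v1
    kind: Pod
    metadata:
      name: csi-controller-sketch
    spec:
      containers:
      - name: external-provisioner           # sidecar: watches PVCs, issues CreateVolume
        image: quay.io/k8scsi/csi-provisioner:v1.0.1
        args: ["--csi-address=/csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: external-attacher              # sidecar: watches VolumeAttachment objects
        image: quay.io/k8scsi/csi-attacher:v1.0.1
        args: ["--csi-address=/csi/csi.sock"]
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: vendor-driver                  # the 3rd-party CSI driver itself (hypothetical image)
        image: example.com/vendor-csi-driver:v0.1
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      volumes:
      - name: socket-dir
        emptyDir: {}                         # shared directory holding the UDS (csi.sock)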

Each storage vendor implements its respective driver against the gRPC service functions, which are in turn grouped into three services:

  • CSI Identity
  • CSI Controller
  • CSI Node

From the CSI specification, the Identity service exposes calls such as GetPluginInfo and Probe, the Controller service handles volume lifecycle calls such as CreateVolume and DeleteVolume, and the Node service handles node-side calls such as NodeStageVolume and NodePublishVolume.

I will not get into the details of each sidecar container and its functions, nor the sub-functions of the 3rd-party storage vendor’s implementation. I am still learning about each of them, but there is a great article out there that carries a wealth of detail on every one of them.

We are seeing more sidecar containers added with each version of Kubernetes, and these storage implementations will continue to change at a rapid pace.

It is getting more confusing

At Google Next 2019 just last week, Google announced Anthos, their hybrid and multi-cloud platform. At the heart of Anthos is Google Kubernetes Engine (GKE). The extension of GKE into on-premises environments, as part of its hybrid cloud strategy, is GKE On-Prem (GKEOP). At present, GKEOP runs as a virtual appliance on VMware vSphere 6.5. This means that instead of considering CSI, the default storage class for GKEOP is vsphereVolume.
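
In practice, that means a StorageClass backed by the in-tree vSphere provisioner, along the lines of this sketch (the class name and disk format here are illustrative, not GKEOP’s actual defaults):

    # vsphere-sc.yaml -- sketch of a vSphere-backed StorageClass
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
    provisioner: kubernetes.io/vsphere-volume   # the in-tree vSphere volume plug-in
    parameters:
      diskformat: thin                          # thin-provisioned VMDK on the datastore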

Here comes the Robin

As part of the GKEOP announcement, Google introduced an advanced data management “middleware”, a storage API powered by Robin Systems Storage. Robin Storage works with vSphere, aggregates all registered storage, and builds a layer of persistent storage for enterprise applications running on GKEOP containers. Robin Storage provides automated provisioning, point-in-time snapshots, backup and recovery, application cloning, QoS guarantees, and multi-cloud migration for stateful applications on Kubernetes.

My current take

I am not impressed. Storage management seems like an anchor weighing down Kubernetes and the containers with it. The ground that Kubernetes wants to build storage on keeps shifting from one thing to another, from FlexVolume to CSI and now Robin Systems Storage in GKEOP.

Enterprise applications want stateful, persistent storage that is also consistent in its design and implementation, but I am not getting that vibe.

I am still learning the pieces of the storage frameworks tied to Kubernetes and Google’s march with GKEOP. Maybe I am getting all these things wrong, but I wish things would get clearer and more ‘standardized’ for Kubernetes storage frameworks.



About cfheoh

I am a technology blogger with 30 years of IT experience. I write heavily on technologies related to storage networking and data management because those are my areas of interest and expertise. I introduce technologies with the objective of getting readers to know the facts and use that knowledge to cut through the marketing hype, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then will there be progress. I am involved in SNIA (Storage Networking Industry Association) and between 2013-2015, I was the SNIA South Asia & SNIA Malaysia non-voting representative to the SNIA Technical Council. I am currently employed at iXsystems as their General Manager for Asia Pacific Japan.

2 Responses to Figuring out storage for Kubernetes and containers

  1. Charles Chow says:

    To be fair, the initial design of containers was never meant for what they are today, and as with most good tech, they were slowly adapted and built to fit into the enterprise. Storage is one such adaptation, as containers were never intended to be stateful or persistent. Having said that, I believe it opens up a lot of opportunities for everyone to innovate and design services around it with little attachment to how it was previously done.

  2. Jeffry Johar says:

    Pods in Kubernetes are meant to be ephemeral, and so is the storage that comes with them.

    If one requires enterprise-class storage for a Kubernetes cluster, it would be best to get it from other services like FTP (oh no!!), NFS, MinIO or FreeNAS!!! 🙂
