The Openstack blippings on my radar have ratcheted up this year. I have been asked several times to put together IaaS designs, with either the RedHat or Ubuntu flavours, and it is a good thing to see the Openstack interest level going up in the Malaysian IT scene. Coming into its 8th year, Openstack has matured into a solid platform, but my observation is that its storage-related projects are still not well known.
I was one of the speakers at the Openstack Malaysia 8th Summit over a month ago. I started my talk with a question – “Can anyone name the 4 Openstack storage projects?“. The response from the floor was “Swift, Cinder, Ceph and … ” (nobody knew the 4th one). It took me by surprise that the floor almost unanimously agreed that Ceph is one of the Openstack projects, when we know that Ceph isn’t one. Ceph? An Openstack storage project?
Besides Swift and Cinder, there is Glance (depending on how you look at it) and the least known of them all … Manila.
I have also been following many Openstack Malaysia discussions and discussion groups for a while. That Ceph response showed the lack of awareness and knowledge of the Openstack storage projects among the Malaysian IT crowd, and it is a difficult issue to tackle. The storage conundrum continues to perplex me, because many of those I have spoken to seem to avoid talking about storage, treating it like a dark art or some voodoo thingy.
I view storage as the cornerstone of the 3 infrastructure pillars – compute, network and storage – of Openstack, or of any software-defined infrastructure stack for that matter. So it is important to get a good understanding of the Openstack storage projects, especially Cinder.
Cinder is the abstraction layer that gives management and control of the block storage beneath it. In a nutshell, it allows Openstack VMs and applications to consume block storage in a consistent and secure way, regardless of the storage infrastructure or technology beneath it. This is achieved through the cinder-volume service, which most storage vendors integrate with via their own drivers (as shown in the diagram below).
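To make this less abstract, here is roughly what wiring a backend into cinder-volume looks like in cinder.conf, using the in-tree LVM reference driver as the example (the section name and backend name here are my own illustrative choices, not canonical values):

```ini
# cinder.conf (illustrative fragment) - each enabled backend gets its own
# section pointing at the vendor's driver class
[DEFAULT]
enabled_backends = lvm-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volume_backend_name = LVM_iSCSI
```

A vendor array would follow the same pattern, just with the vendor's driver class and its own connection options in place of the LVM ones.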
Cinder-volume, together with cinder-api and cinder-scheduler, forms the Block Storage service for Openstack. There is another service, cinder-backup, which integrates with Openstack Swift, but at my last check this service is not as popular as cinder-volume, which is widely supported by many storage vendors with both Fibre Channel and iSCSI implementations, and by a few vendors with NFS and SMB as well.
The cinder-api is the layer that interfaces with the Openstack VMs, applications, services and users. It provides the API (Application Programming Interface) that allows Openstack resources and applications to access block storage services via software functions, scripts and other procedure calls. The cinder-scheduler uses selection methods to choose the optimal storage backend on which to create block volumes. These methods can be influenced through filters and weights, allowing Openstack to apply the right storage policy for its block volumes under different requirements. Inter-process communication and messaging between the various Cinder processes goes through message queues, with RabbitMQ being one of the most popular.
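The filter-and-weigh behaviour of cinder-scheduler is configurable in cinder.conf. A minimal sketch, using the stock filters and weighers that ship with Cinder (filters prune backends that cannot serve the request; weighers rank whatever survives):

```ini
# cinder.conf (illustrative fragment)
[DEFAULT]
# Drop backends in the wrong zone, without enough free capacity,
# or missing the requested capabilities ...
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
# ... then prefer the backend with the most free capacity
scheduler_default_weighers = CapacityWeigher
```

Pairing this with volume types and extra specs is how an Openstack cloud steers, say, database volumes to an all-flash backend and archive volumes to a cheaper one.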
Note: I did not focus on cinder-backup because the storage vendor integration support to Openstack Swift is not as comprehensive as cinder-volume.
Storage vendors supply their own cinder-volume drivers. Some have a single, unified driver for all their storage platforms and models, whilst others have several drivers, one for each platform or model. A set of basic operations is required of every Cinder driver, and these include:
- Create Volume
- Delete Volume
- Attach Volume
- Detach Volume
- Extend Volume
- Create Snapshot
- Delete Snapshot
- Create Volume from Snapshot
- Create Volume from Volume (clone)
- Create Image from Volume
- Volume Migration (host assisted)
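To show what that operation set amounts to, here is a toy sketch in Python. This is emphatically not the real Cinder driver API – actual drivers subclass `cinder.volume.driver.VolumeDriver` and issue commands to the vendor array – but a self-contained stand-in that keeps volumes in a dict, just to make the shape of the operations concrete:

```python
# Illustrative sketch only: a minimal stand-in for a cinder-volume driver.
# Real drivers talk to the storage array; this one uses in-memory dicts.

class ToyVolumeDriver:
    def __init__(self):
        self.volumes = {}    # volume name -> size in GB
        self.snapshots = {}  # snapshot name -> (source volume name, size in GB)

    def create_volume(self, name, size_gb):
        self.volumes[name] = size_gb

    def delete_volume(self, name):
        del self.volumes[name]

    def extend_volume(self, name, new_size_gb):
        # Extend only grows a volume; shrinking is not an operation Cinder offers
        if new_size_gb < self.volumes[name]:
            raise ValueError("shrink not supported")
        self.volumes[name] = new_size_gb

    def create_snapshot(self, snap_name, volume_name):
        self.snapshots[snap_name] = (volume_name, self.volumes[volume_name])

    def delete_snapshot(self, snap_name):
        del self.snapshots[snap_name]

    def create_volume_from_snapshot(self, name, snap_name):
        _, size = self.snapshots[snap_name]
        self.volumes[name] = size


driver = ToyVolumeDriver()
driver.create_volume("vol1", 10)
driver.extend_volume("vol1", 20)
driver.create_snapshot("snap1", "vol1")
driver.create_volume_from_snapshot("vol2", "snap1")
print(driver.volumes)  # both volumes now 20 GB
```

In the real service, these same verbs arrive via cinder-api and are dispatched by cinder-scheduler to whichever backend's driver wins the filter-and-weigh selection.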
The operations supported by each storage vendor differ from platform to platform and model to model. For instance, a quick check of the HPE Cinder drivers in the latest Rocky release revealed the different and inconsistent sets of operations supported by each HPE storage platform:
A full list of vendor-specific Cinder drivers can be found here, and new ones join the bandwagon with every release.
So, back to my earlier conundrum. It is the aspiration of this naggy blogger to increase the awareness of storage services in Openstack. It is important to go deep into the technical side of storage in order to take full advantage of block storage services delivered via the Fibre Channel and iSCSI protocols.
And even more exciting is the burgeoning NVMe (non-volatile memory express) and NVMe-oF (NVMe over Fabrics) storage protocols. In my talk at the Malaysian Openstack summit, I presented the case study of Excelero accelerating a mindblowing 2,000% storage performance jump for an Openstack Cloud provider, teuto.net, in Germany.
Using Excelero’s patented RDDA (Remote Direct Disk Access), Mellanox RoCEv2 networking and their own Openstack Cinder driver, teuto.net achieved this supercharged storage performance, far beyond what its original Ceph architecture could deliver. The architecture below was shared with me by Excelero:
Seeing new performance possibilities (and new capabilities) coming from the many storage vendors contributing to the Openstack storage projects, I hope storage will get better recognition among Openstack enthusiasts, practitioners and IT architects in the Malaysian IT scene.