Layers in Storage – For better or worse

Storage arrays and storage services are built upon layers and layers within their architecture. The physical components, hard disk drives and solid state drives, are abstracted into RAID volumes and virtualized into other storage constructs before they are exposed as shares/exports, LUNs or objects to the network.

Everyone in the storage networking industry is cognizant of these layers; they are the foundation of our knowledge and experience. The public cloud storage services side is the same, albeit more opaque. Nevertheless, both have layers.

In the early 2000s, the SNIA® Technical Council outlined a blueprint of the SNIA® Shared Storage Model, a framework describing the layers and properties of a storage system and its services. It was similar to the OSI 7-layer model for networking. The framework helped many industry professionals and practitioners shape their understanding and develop knowledge in their respective fields. The layering scheme of the SNIA® Shared Storage Model is shown below:

SNIA Shared Storage Model – The layering scheme

Storage vendors' layering schemes

While the SNIA® storage layers were generic and open, each storage vendor had their own proprietary implementation of storage layers. Some of these architectures are simple, but some I find a bit too complex and convoluted.

Here is an example of the layers of the Automated Volume Management (AVM) architecture of the EMC® Celerra®.

EMC Celerra AVM Layering Scheme

I would often scratch my head about AVM. Disks were grouped into RAID groups, which were presented as LUNs (Logical Unit Numbers). These were then defined as Celerra® dvols (disk volumes), and stripes of the dvols were consolidated into a storage pool.

From the pool, a piece of storage capacity called a slice volume was combined with other slice volumes into a metavolume, which was eventually presented as a file system to the network and its respective NAS clients. Explaining this took effort, and I did it often as the IP Storage product manager for EMC® between 2007 and 2009. It was a far cry from the simplicity of the NetApp® ONTAP 7 architecture of RAID groups, volumes and the WAFL® (Write Anywhere File Layout) filesystem.
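To make that chain of constructs a little more concrete, here is a toy model of the layering, written purely for illustration. The class and field names are mine, not Celerra's, and it only models the capacity arithmetic of the dvols, pools, slice volumes, metavolumes and filesystems described above.

```python
# Illustrative only: a toy model of the AVM construct chain described above.
# Names and fields are hypothetical and do not reflect any Celerra API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dvol:                 # disk volume backed by a RAID-group LUN
    name: str
    size_gb: int

@dataclass
class StoragePool:          # stripes of dvols consolidated into a pool
    name: str
    dvols: List[Dvol] = field(default_factory=list)
    def capacity_gb(self) -> int:
        return sum(d.size_gb for d in self.dvols)

@dataclass
class SliceVolume:          # a slice of capacity carved from the pool
    pool: StoragePool
    size_gb: int

@dataclass
class Metavolume:           # slice volumes combined into a metavolume
    slices: List[SliceVolume]
    def size_gb(self) -> int:
        return sum(s.size_gb for s in self.slices)

@dataclass
class FileSystem:           # the filesystem finally exported to NAS clients
    name: str
    meta: Metavolume

# Walk the chain: dvols -> pool -> slices -> metavolume -> filesystem
pool = StoragePool("pool0", [Dvol("d7", 500), Dvol("d8", 500)])
fs = FileSystem("fs01", Metavolume([SliceVolume(pool, 200), SliceVolume(pool, 200)]))
print(fs.name, fs.meta.size_gb(), "GB carved from", pool.capacity_gb(), "GB")
```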

Another complicated layered framework I often gripe about is Ceph. Here is a look at how the layers of CephFS are constructed.

Ceph Storage Layered Framework

I work with the OpenZFS filesystem a lot. It is something I am rather familiar with, and the layered structure of the ZFS filesystem is considerably simpler.
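For comparison, here is a minimal sketch of the OpenZFS chain, assuming an OpenZFS host with spare disks at the placeholder /dev/sdX paths. Three commands take you from raw disks to an exported dataset, with no intermediate LUN, dvol, slice or metavolume constructs.

```python
# A minimal sketch of the (much shorter) OpenZFS chain: disks -> pool -> dataset -> share.
# Assumes OpenZFS is installed; the /dev/sdX device names are placeholders for real disks.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Physical disks go straight into a RAID-Z2 pool (no separate RAID/LUN/dvol layers)
run(["zpool", "create", "tank", "raidz2", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"])
# 2. A dataset (filesystem) carved from the pool
run(["zfs", "create", "-o", "compression=lz4", "tank/projects"])
# 3. Exported to NAS clients (NFS here; sharing behaviour depends on the platform)
run(["zfs", "set", "sharenfs=on", "tank/projects"])
```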

Storage architecture mixology

Engineers are bizarre when they get too creative. They have a can-do attitude that sometimes transcends the boundaries of practicality and boggles many minds. This is what happens when they have their own mixology ideas.

Recently I spoke to two magnanimous persons who had the idea of presenting Ceph iSCSI LUNs to the ZFS filesystem in order to use the simplicity of the NAS file sharing capabilities in TrueNAS® CORE. In their own words, Ceph's NAS capabilities sucked. I had to draw their whole idea out in PowerPoint, and this is the architecture I got from the conversation.

There are 3 different storage subsystems here just to provide NAS. As if the Ceph layers aren't complicated enough, the iSCSI LUNs from Ceph are presented as Cinder volumes to the KVM hypervisor (or VMware® ESXi) through the Cinder driver. Cinder is the persistent storage volume subsystem of the Openstack® project. The Cinder volumes/hypervisor datastores are then virtualized as vdisks to the respective VMs running TrueNAS® CORE and the OpenZFS filesystem. From TrueNAS® CORE, shares and exports are provisioned via the SMB and NFS protocols to Windows and Linux clients respectively.
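A hedged sketch of just the middle hop in that stack is shown below: carving a Cinder volume from the Ceph-backed volume type and attaching it to the TrueNAS® CORE guest, where it appears as a vdisk for the ZFS pool. The volume type, size and server name are hypothetical, and it assumes the usual OS_* authentication variables are set.

```python
# Hedged sketch of the Openstack hop only: Ceph-backed Cinder volume -> vdisk on the TrueNAS VM.
# Volume type, size and server name are hypothetical; assumes openstack CLI and credentials.
import subprocess

def os_cli(*args):
    cmd = ["openstack", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Cinder volume carved from the Ceph (iSCSI) backed volume type
os_cli("volume", "create", "--size", "500", "--type", "ceph-iscsi", "truenas-vdisk1")
# Attach it to the TrueNAS CORE guest, where it shows up as a vdisk for the ZFS pool
os_cli("server", "add", "volume", "truenas-core-vm", "truenas-vdisk1")
```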

It works! As I was told, it worked!

A.P.P.A.R.M.S.C. considerations

Continuing from the layered NAS framework described above, other aspects besides the technical work have to be considered, even when it can work technically.

I often use a set of diligent data storage focal points when considering a good storage design and implementation. This is the A.P.P.A.R.M.S.C. Take, for instance, Protection as one of the focal points, with snapshots as the technology to use.

Snapshots can be executed at the ZFS level on the TrueNAS® CORE subsystem. Snapshots can be triggered at the volume level in the Openstack® subsystem and, likewise, RBD snapshots at the Ceph subsystem. The question is, which snapshot at which storage subsystem is the most valuable to the operations and the business? Do you run all 3 snapshots? How do you execute them in succession in a scheduled policy?
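To illustrate what "run all 3 snapshots" would actually mean operationally, here is a sketch with one command per subsystem. All pool, volume and image names are hypothetical, and each command has to run where that subsystem's tooling and credentials live (the TrueNAS® shell, an Openstack® client host, a Ceph node).

```python
# A sketch of what running all 3 snapshots would look like, one command per subsystem.
# Pool/volume/image names are hypothetical placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. ZFS snapshot inside the TrueNAS CORE guest (closest to the NAS shares)
run(["zfs", "snapshot", "tank/projects@daily-0100"])
# 2. Cinder snapshot of the vdisk volume at the Openstack layer
run(["openstack", "volume", "snapshot", "create", "--volume", "truenas-vdisk1", "vdisk1-daily-0100"])
# 3. RBD snapshot of the backing image at the Ceph layer
run(["rbd", "snap", "create", "rbdpool/vdisk1-image@daily-0100"])
```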

In terms of performance, can the design truly maximize its potential? Can it churn out the best IOPS and deliver at wire speed? What latency can we expect with so many layers across 3 different storage subsystems?

And supporting such an architecture would be a nightmare. Where do you even start troubleshooting?

Those are just a few of the considerations and questions to think about when such a layered storage architecture comes along. IMHO, such a design is over-engineered. I was tempted to say, "Just because you can, doesn't mean you should."

Elegance in Simplicity

Einstein (I think) is quoted as saying:

Einstein’s quote on simplicity and complexity

I am not saying that having many layers is wrong. A heavily layered architecture works for many storage solutions out there, where it is often masked by a simple and intuitive UI. But in yours truly's point of view, as a storage architecture enthusiast and connoisseur, there is beauty and elegance in simple designs.

The purpose here is to promote a better understanding of the storage layers, and how they integrate and interact with each other to deliver data services to the network. In the end, that is how most storage architectures are built.

 

Pondering Redhat’s future with IBM

I woke up yesterday morning to a shocker of a news story. IBM announced that they were buying Redhat for USD34 billion. Never in my mind did I think Redhat would sell, but I guess USD190.00 per share was too tempting. Redhat (RHT) was trading at USD116.68 at the previous Friday's close.

Redhat is one of my favourite technology companies. I love their Linux development and progress, and I use a lot of Fedora and CentOS in my hobbies. I started with Redhat back in 2000, when I became obsessed with getting my RHCE (Redhat Certified Engineer). I recall spending almost every weekend (Saturday and Sunday) back in 2002 in the office, learning Redhat and hacking scripts to get really good at it. I got certified on RHCE 4 with a 96% passing mark, and I was very proud of my certification.

One of my regrets was not joining Redhat in 2006. I was offered an SE job by Josep Garcia, the very first such position in Malaysia. Instead, I took up the Hitachi Data Systems job to helm the project implementation and delivery for the Shell GUSto project. Things might have turned out differently if I had.

The IBM acquisition of Redhat left a poignant feeling in me. In many ways, Redhat has been the shining star of Linux. They are the only significant player left leading the charge of open source. They are the largest contributor to the Openstack projects and continue to support the project strongly, whilst early protagonists like HPE, Cisco and Intel have reduced their support. They are, of course, a perennial top-3 contributor to the Linux kernel since the very early days. And Redhat continues to contribute to projects such as containers and Kubernetes, and made that commitment deeper with their acquisition of CoreOS a few months back.


The Malaysian Openstack storage conundrum

The Openstack blips on my radar have ratcheted up this year. I have been asked to put together IaaS designs several times, with either the RedHat or the Ubuntu flavour, and it is a good thing to see the Openstack interest level going up in the Malaysian IT scene. Coming into its 8th year, Openstack has become a mature platform, but my observations tell me that its storage-related projects are not as well known as they should be.

I was one of the speakers at the Openstack Malaysia 8th Summit over a month ago. I started my talk with a question: "Can anyone name the 4 Openstack storage projects?" The response from the floor was "Swift, Cinder, Ceph and … (nobody knew the 4th one)". It took me by surprise that the floor almost unanimously agreed that Ceph is one of the Openstack projects, but we know that Ceph isn't one. Ceph? An Openstack storage project?

Besides Swift and Cinder, there is Glance (depending on how you look at it) and the least known of them all … Manila.

I have also been following many Openstack Malaysia discussions and discussion groups for a while. That Ceph response showed the lack of awareness and knowledge of the Openstack storage projects among the Malaysian IT crowd, and it is a difficult issue to tackle. The storage conundrum continues to perplex me because many whom I have spoken to seem to avoid talking about storage, viewing it as a dark art or some voodoo thingy.

I view storage as the cornerstone of the 3 infrastructure pillars – compute, network and storage – of Openstack, or any software-defined infrastructure stack for that matter. So it is important to get an understanding of the Openstack storage projects, especially Cinder.

Cinder is the abstraction layer that gives management and control to the block storage beneath it. In a nutshell, it allows Openstack VMs and applications to consume block storage in a consistent and secure way, regardless of the storage infrastructure or technology beneath it. This is achieved through the cinder-volume service, which hosts the drivers that most storage vendors integrate with (as shown in the diagram below).

Diagram in slides is from Mirantis, found at https://www.slideshare.net/mirantis/openstack-architecture-43160012

Cinder-volume, together with cinder-api and cinder-scheduler, forms the Block Storage Services for Openstack. There is another service, cinder-backup, which integrates with Openstack Swift, but at my last check this service is not as popular as cinder-volume, which is widely supported by many storage vendors with both Fibre Channel and iSCSI implementations, and by a few vendors with NFS and SMB as well.
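As a rough illustration of how an application consumes Cinder the same way regardless of which vendor driver sits behind cinder-volume, here is a minimal sketch using openstacksdk. It assumes a reasonably recent SDK, a clouds.yaml entry named mycloud, and that the backup service is actually deployed; the volume name and size are placeholders.

```python
# A minimal sketch of consuming Cinder programmatically, independent of the backend driver.
# Assumes a recent openstacksdk and a clouds.yaml entry named "mycloud".
import openstack

conn = openstack.connect(cloud="mycloud")

# cinder-api receives the request, cinder-scheduler picks a backend,
# and cinder-volume's driver carves the volume on the array.
vol = conn.block_storage.create_volume(name="app-data-01", size=10)
vol = conn.block_storage.wait_for_status(vol, status="available")
print(vol.id, vol.status)

# cinder-backup (if deployed) can then push a copy of the volume into Swift.
backup = conn.block_storage.create_backup(volume_id=vol.id, name="app-data-01-bkp")
print(backup.id)
```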

The Prophet has arrived

Early last week, I had a catch-up with my friend. He was excited to share with me the new company he had just joined. It was ProphetStor. It was a catchy name, and after our conversation I decided to spend a bit of my weekend afternoon finding out more about the company and its technology.

I had known of this company for several months, from another friend at FalconStor. Ex-FalconStor executives ventured out to found ProphetStor as a next-generation storage resource orchestration engine, and it has found a very interesting tack to differentiate itself from the many would-be "software-defined storage" leaders. ProphetStor made their early appearance at the OpenStack Summit in Hong Kong back in November last year, positioning several key technologies, including OpenStack Cinder, SNIA CDMI (Cloud Data Management Interface) and SMI-S (Storage Management Initiative Specification), to provide federated storage resource discovery, provisioning and automation.

The federation of storage resources and services solution is aptly called the ProphetStor Federator. The diagram I picked up from the El Reg article presents the Federator working with the different OpenStack initiatives quite nicely below.

There are 3 things that attracted me to the uniqueness of ProphetStor:

1. The underlying storage resources, be they files, objects, or blocks, can be presented and exposed as Cinder-style volumes.

2. The ability to define the different performance capabilities and SLAs (IOPS, throughput and latency) of the underlying storage resources and match them to the right application requirements.

3. The use of SNIA's SMI-S and CDMI.

Needless to say, the Federator software will abstract the physical and logical structures of any storage brand or storage architecture, giving it a very strong validation of the "software-defined storage (SDS)" concept.

While the SDS definition is still being moulded in the marketplace (and I know that SNIA already has a draft SDS paper out), the ProphetStor SDS concept does indeed look similar to the route taken by EMC ViPR. The use of the control plane (ProphetStor Federator) and the data plane (underlying physical and logical storage resource) is obvious.

I wrote about ViPR many moons ago on my blog, and I see ProphetStor as another hat in the SDS ring. I grabbed the screenshot (below) from the ProphetStor website, which I thought beautifully explained what ProphetStor is from a 10,000-foot view.

ProphetStor How it works

The Cinder-style volume is a class move. It preserves the sanctity of the many enterprise applications which still need block storage volumes, but now it comes with a twist: these block storage volumes will have different capability and performance profiles, tagged with the relevant classifications and SLAs.
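To picture what "volumes tagged with performance classes and SLAs" can look like, here is a sketch using plain Cinder volume types and QoS specs. This is standard Openstack functionality, not ProphetStor's implementation, and the names and limits are hypothetical.

```python
# Not ProphetStor's implementation: just plain Cinder volume types and QoS specs, shown
# to make the idea of "volumes tagged with performance classes and SLAs" concrete.
# Names and limits are hypothetical; assumes admin credentials in the environment.
import subprocess

def os_cli(*args):
    cmd = ["openstack", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# A "gold" class capped at 5000 IOPS and 400 MB/s ...
os_cli("volume", "qos", "create", "--property", "total_iops_sec=5000",
       "--property", "total_bytes_sec=419430400", "gold-qos")
os_cli("volume", "type", "create", "gold")
os_cli("volume", "qos", "associate", "gold-qos", "gold")

# ... which applications simply request by type when they ask for a volume.
os_cli("volume", "create", "--size", "100", "--type", "gold", "oracle-redo-01")
```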

And this is where the SNIA SMI-S discovery component is critical, because SMI-S mines these storage characteristics and presents them to the ProphetStor Federator for storage resource classification. For storage vendors that do not have SMI-S support, ProphetStor can customize the relevant interfaces to their proprietary APIs to discover the storage characteristics.
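For a feel of what SMI-S mining looks like, here is a minimal sketch using the pywbem library against a hypothetical SMI-S provider. The endpoint, credentials and namespace are placeholders (the namespace in particular is vendor-specific), and this is generic SMI-S, not the Federator's own discovery code.

```python
# Generic SMI-S discovery sketch with pywbem; endpoint, credentials and namespace are
# placeholders, and the CIM namespace varies by vendor.
import pywbem

conn = pywbem.WBEMConnection("https://array-smis.example.com:5989",
                             ("admin", "secret"),
                             default_namespace="root/cimv2")

# Enumerate storage volumes advertised by the provider and read a few characteristics
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    print(vol.get("ElementName"), vol.get("BlockSize"), vol.get("NumberOfBlocks"))
```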

On the northbound end, SNIA CDMI works with the ProphetStor Federator's Offer & Provisioning functions to bundle and wrap the various storage resources for the cloud and for traditional storage network architectures.
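And for the northbound side, here is a generic SNIA CDMI sketch, not ProphetStor's actual API: creating a CDMI container over HTTP with a bit of service-class metadata. The endpoint, credentials and container name are all hypothetical.

```python
# A generic SNIA CDMI sketch (not ProphetStor's API): creating a CDMI container over HTTP.
# Endpoint, credentials and container name are hypothetical.
import requests

CDMI = "https://cdmi.example.com/cdmi"          # hypothetical CDMI endpoint
headers = {
    "X-CDMI-Specification-Version": "1.0.2",
    "Content-Type": "application/cdmi-container",
    "Accept": "application/cdmi-container",
}

resp = requests.put(
    f"{CDMI}/gold-tier-exports/",               # trailing slash denotes a container
    json={"metadata": {"service_class": "gold"}},
    headers=headers,
    auth=("admin", "secret"),
)
resp.raise_for_status()
print(resp.json().get("objectID"))
```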

I have asked my friend for more ProphetStor technology deep-dive materials (he has yet to reply) to ascertain what I have just written. (Simon, you have to respond to me!)

These are indeed very exciting times, with ProphetStor emerging as one of the early leaders in the SDS space, and I would like to see ProphetStor go far with this.

Now let us pray … because the prophet has arrived.

And Cloud Storage will make us even stranger

It was a dark and stormy night ….

I was in a car with my host in the stifling traffic jams on the streets of Jakarta. We had just finished dinner, and his driver was taking me back to the hotel. It was about 9pm, and we were making conversation, trying to figure out how we could work together. My host, a wonderful Singaporean who has been residing in Jakarta for more than a decade and a half, owns a distributorship focusing mainly on IT security solutions. He had invited me over to Jakarta to give a talk on Cloud Storage at the Indonesia CIO Network event on January 9th 2013.

I was there to represent SNIA South Asia to give a talk about CDMI (Cloud Data Management Interface), and my host also took the opportunity to introduce Nutanix, a SAN-less 2-tier, high-performance, virtualized data center platform. (Note: That’s quite a mouthful, but gotta include all the buzz-words in there). It was my host’s first foray into storage networking solutions, away from his usual security solutions spread. As the conversation went on in the car, he said “You storage guys are so strange!“.

To many of the IT folks who have been involved in OS, applications, security, and networking, to name a few, storage is like a dark art, some mumbo jumbo, a voodoo-like science known to a select few. That's great, because this perception will keep us relevant, and keep our value and our jobs. To me, that's just fine and dandy, and I like it that way. 🙂

In preparation for the event, I had to read up on SNIA CDMI. Cloud and Storage … Cloud and Storage … Cloud and Storage. Hmmm …

Houston, we have an OpenStack problem

I have always wanted to look deeper into OpenStack, but I never got around to it. However, last week, something about NASA and OpenStack caught my attention … something about NASA pulling out of OpenStack development.

The spin that "OpenStack has come into its own" is true, because OpenStack today has 180 companies (at last count, on June 20th 2012) participating in and contributing to the development, deployment and marketing of the highly popular Infrastructure-as-a-Service cloud computing project. So the NASA withdrawal itself was not felt as badly as what NASA said next.

When NASA CIO Linda Cureton announced that NASA had shifted to Amazon Web Services (AWS) for their enterprise cloud-based infrastructure and had saved almost a million dollars in costs, that was a clear and blatant impalement to the very heart and soul of OpenStack. NASA, one of the 2 founders of OpenStack in 2010, had switched sides to announce their preference for OpenStack's rival, AWS. It pains me just to hear of such a defection.

A cloud economy emerges … somewhat

A few hours ago, Rackspace announced the first "productized" Rackspace Private Cloud solution based on OpenStack. According to Openstack.org,

OpenStack is a global collaboration of developers and cloud computing 
technologists producing the ubiquitous open source cloud computing platform for 
public and private clouds. The project aims to deliver solutions for all types of 
clouds by being simple to implement, massively scalable, and feature rich. 
The technology consists of a series of interrelated projects delivering various 
components for a cloud infrastructure solution.

Founded by Rackspace Hosting and NASA, OpenStack has grown to be a global software 
community of developers collaborating on a standard and massively scalable open 
source cloud operating system. Our mission is to enable any organization to create 
and offer cloud computing services running on standard hardware. 
Corporations, service providers, VARS, SMBs, researchers, and global data centers 
looking to deploy large-scale cloud deployments for private or public clouds 
leveraging the support and resulting technology of a global open source community.
All of the code for OpenStack is freely available under the Apache 2.0 license. 
Anyone can run it, build on it, or submit changes back to the project. We strongly 
believe that an open development model is the only way to foster badly-needed cloud 
standards, remove the fear of proprietary lock-in for cloud customers, and create a 
large ecosystem that spans cloud providers.

And Openstack just turned 1 year old.

So, what’s this Rackspace private cloud about?

In the existing cloud economy, customers subscribe to a cloud service provider. The customer pays a (usually monthly) subscription fee in a pay-as-you-use model. And in my previous blog, I courageously predicted that the new cloud economy will drive the middle tier (i.e. IT distributors, resellers and system integrators) out of the IT ecosystem. Before I lose the plot: Rackspace is now providing the ability for customers to install an Openstack-ready, Rackspace-approved private cloud architecture in their own datacenter, not in Rackspace Hosting.

This represents a tectonic shift in the cloud economy, putting the control and power back into the customers' hands. For too long, there were questions about data integrity, security, control, cloud service provider lock-in and so on, but with the new Rackspace offering, customers can build their own private cloud ecosystem, or they can get professional services from Rackspace cloud systems integrators. Furthermore, once they have built their private cloud, they can either manage it themselves or get Rackspace to manage it for them.

How does Rackspace do it?

From their vast experience in building Openstack clouds, Rackspace Cloud Builders have created a free reference architecture.  Currently OpenStack focuses on two key components: OpenStack Compute, which offers computing power through virtual machine and network management, and OpenStack Object Storage, which is software for redundant, scalable object storage capacity.
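As a small taste of the Object Storage side, here is a minimal sketch of storing and retrieving an object with python-swiftclient. The auth URL, credentials, container and object names are all placeholders.

```python
# A minimal sketch of consuming OpenStack Object Storage (Swift) with python-swiftclient.
# The auth URL, credentials, container and object names are all placeholders.
from swiftclient import client as swift

conn = swift.Connection(
    authurl="https://keystone.example.com:5000/v3",
    user="demo", key="secret",
    os_options={"project_name": "demo", "user_domain_name": "Default",
                "project_domain_name": "Default"},
    auth_version="3",
)

conn.put_container("backups")                                      # create a container
conn.put_object("backups", "hello.txt", contents=b"hello swift")   # upload an object
hdrs, body = conn.get_object("backups", "hello.txt")               # read it back
print(body.decode())
```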

In the Openstack architecture, there are 3 major components – Compute, Storage and Images.

More information about the Openstack Architecture can be found here. And with 130 partners in the Openstack alliance (which includes Dell, HP, Cisco, Citrix and EMC), customers have plenty to choose from, lessening the impact of lock-in.

What does this represent to storage professionals like us?

This Rackspace offering is game-changing and could perhaps spark an economy for partners to work with Cloud Service Providers. It definitely addresses some key customer concerns related to security and the freedom to choose, and even change, service providers. It seems to offer the best of both worlds (for now), but Rackspace is not looking at this for immediate gains. We still do not know how this economic pie will grow and how it will affect the cloud economy. And this does not negate the fact that we storage professionals have to dig deeper and learn more, nor does it change the fact that we have to evolve to compete against the best in the world.

Rackspace has come out beating its chest, predicting that the cloud computing API space will boil down to these 3 players – Rackspace Openstack, VMware and Amazon Web Services (AWS). Interestingly, Redhat Aeolus (previously known as Deltacloud) was not deemed worthy of a mention by Rackspace. Some pooh-poohing going on?