Is Software Defined right for Storage?

George Herbert Leigh Mallory, mountaineer extraordinaire, was once asked, “Why did you want to climb Mount Everest?”, to which he replied, “Because it’s there”. That retort demonstrated the indomitable human spirit, and perhaps best exemplified humanity’s desire to conquer the physical limits of nature. The software of humanity versus the hardware of planet Earth.

A similar juxtaposition can be drawn between software and hardware in computer systems, and in storage technology in particular. There are a few schools of thought when it comes to delivering storage services, the notable ones being the storage appliance model and the software-defined storage model.

There are arguments, of course. Some are genuinely partisan, but many a time these arguments come in the form of the flavour of the moment. I have seen my past companies tout the storage appliance model very strongly in the beginning, only to switch to a “software company” chorus years later. That is what I meant by the “flavour of the moment”.

Software Defined Storage

Continue reading

A FreeNAS Compression Tale

David vs Goliath Credit: Miguel Robledo of https://www.artstation.com/miguel_robledo

David vs Goliath

It was an underdog tale worthy of the biblical book of Samuel. When I first caught wind of how the compression prowess of FreeNAS™ fared against NetApp® compression and deduplication in one use case, I had to find out more. And the results in this use case were quite impressive, considering that FreeNAS™ (now known as TrueNAS® CORE) is a free, open source storage operating system while NetApp® Data ONTAP is the industry-leading, enterprise, “king of the hill” storage data management software.

Certainly a David vs Goliath story.

Compression in FreeNAS

Ah, Compression! That technology that is often hidden, hardly seen and frequently forgotten.

Compression is a feature within FreeNAS™ that seldom gets the attention it deserves. It works, and it is certainly a mature form of data footprint reduction (DFR) technology, alongside data deduplication. It is switched on by default when creating a dataset, as shown below:

Dataset creation with Compression (lz4) turned on

The default compression algorithm is lz4, which is fast but poorer in compression ratio compared to gzip and bzip2. However, lz4 uses fewer CPU cycles to perform its compression and decompression processing, and thus the impact on FreeNAS™ and TrueNAS® is very low.
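For readers who want to see this on their own systems, here is a minimal sketch from the shell (tank/mydataset is a placeholder pool/dataset name) of how to check the active algorithm and the compression ratio actually achieved:

# zfs get compression tank/mydataset      # shows the active algorithm (lz4 by default); placeholder dataset name
# zfs get compressratio tank/mydataset    # shows the compression ratio achieved so far

The compressratio property is a quick way to gauge this kind of data footprint savings on any ZFS dataset.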

NetApp® ONTAP, if I am not wrong, uses lzopro as its default – a commercial and optimized version of the open source LZO compression library. In addition, NetApp® has its data deduplication technology as well, something OpenZFS has to improve upon in the future.

The DFR report

This brings us to the use case at an iXsystems™ customer in Taiwan. The data to be reduced is mostly log files at the end user’s site, and the version of FreeNAS™ is 11.2u7. There are, of course, many factors that affect the data reduction ratio, but across these 4 scenarios, the end user has been running the setup in production for over 2 months. The results:

FreeNAS vs NetApp Data Footprint Reduction

In 2 of the 4 scenarios, FreeNAS™ performed admirably with just the default lz4 compression alone, compared to NetApp®, which was running both its inline compression and deduplication.

The intention of posting this report is not to show that FreeNAS™ is better in every case. It won’t be, and there are superior data footprint reduction technologies out there which can outperform it. But I would expect potential and existing end users to leverage the compression capability of FreeNAS™, which is getting better all the time.

A better compression algorithm

Followers of OpenZFS are aware of the changing of times with OpenZFS version 2.0. One exciting update is the introduction of the zstd compression algorithm into OpenZFS late last year, and it is already in TrueNAS® CORE and Enterprise version 12.x.

What is zstd? zstd is a fast compression algorithm that aims to be as efficient as (or better than) gzip, but at speeds closer to lz4. For a long time, the gzip compression algorithm, with its levels 1-9, has delivered very good compression ratios compared to many other compression algorithms, lz4 included.

However, that efficiency comes at a higher processing cost and thus takes a longer time. At the other end, lz4 is fast and lightweight, but its reduction ratio is much poorer. zstd intends to be the in-between of gzip and lz4. The latest results published on Facebook’s GitHub page are shown below:

zstd performance benchmark against other compression algorithms

For comparison, zstd (level -1) performed very well against zlib, the data compression library behind gzip. It was made known that there are 22 levels of compression in zstd, but I do not know how many levels are accepted in the OpenZFS implementation.

At the same time, compression takes advantage of multi-core processing, and it can actually speed up disk I/O response because the data written to and read from disk is smaller after compression.

While TrueNAS® still defaults to lz4 compression as of now, you can change the compression of a dataset with a command such as the one below (note that the new setting only applies to data written after the change):

# zfs set compression=zstd-6 pool/dataset

Your choice

TrueNAS® and FreeNAS™ support multiple compression algorithms: lz4, gzip and now zstd. That gives the administrator the choice to assign the right compression algorithm to each dataset based on processing power, storage savings and time, to get the best out of the data stored in the datasets. See the sketch below.
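As a hedged illustration (the dataset names are hypothetical), different datasets in the same pool can each be given the algorithm that suits their workload:

# zfs set compression=lz4 tank/vmstore        # latency-sensitive data: fast, lightweight compression (placeholder dataset)
# zfs set compression=zstd tank/fileshares    # general file shares: balanced ratio and speed
# zfs set compression=gzip-9 tank/archive     # cold archives: maximum reduction at a higher CPU cost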

As far as the David vs Goliath tale goes, this real life use case was indeed a good one to share.

 

The hot cold times of HCI

Hyperconverged Infrastructure (HCI) is a hot technology. It has been for the past decade, ever since Nutanix™ took the first mover advantage from the Converged Infrastructure (CI) technology segment and made it pretty much its own for a while.

Hyper Converged Infrastructure

But the HCI market (not the technology) is a strange one. It is hot. It is cold. The perennial leader, Nutanix™, has yet to eke out a profitable year. VMware® is strong in the market. Cisco™, which was hot with its HyperFlex solution in 2019, was also stopped short with a dismal decline in the IDC Worldwide HCI 2Q2020 tracker below:

IDC Worldwide Hyperconverged Infrastructure Tracker – 2Q2020

dHCI = Disaggregated or discombobulated? 

dHCI stands for disaggregated HCI. The disaggregation refers to disaggregated hardware, especially on the storage side. Vendors like HPE® with Nimble Storage, Hitachi Vantara, NetApp® and a few more have touted the disaggregation of performance and capacity, the separation of storage and compute, as a value proposition, but on closer inspection it is just another marketing ploy to attach SAN storage to servers. It is marketing old wine in a new bottle. As rightly pointed out by my friend Charles Chow of Commvault®, quoted in his blog

Continue reading

Layers in Storage – For better or worse

Storage arrays and storage services are built upon layers and layers of architecture. The physical components of hard disk drives and solid state drives are abstracted into RAID volumes, then virtualized into other storage constructs, before they are exposed as shares/exports, LUNs or objects to the network.

Everyone in the storage networking industry is cognizant of these layers; they are the foundation of knowledge and experience. The public cloud storage services side is the same, albeit more opaque. Nevertheless, both have layers.

In the early 2000s, the SNIA® Technical Council outlined a blueprint of the SNIA® Shared Storage Model, a framework describing the layers and properties of a storage system and its services. It was similar to the OSI 7-layer model for networking. The framework helped many industry professionals and practitioners shape their understanding and the development of knowledge in their respective fields. The layering scheme of the SNIA® Shared Storage Model is shown below:

SNIA Shared Storage Model – The layering scheme

Storage vendors layering scheme

While the SNIA® storage layers were generic and open, each storage vendor had its own proprietary implementation of storage layers. Some of these architectures are simple, but some, I find, are a bit too complex and convoluted.

Here is an example of the layers of the Automated Volume Management (AVM) architecture of the EMC® Celerra®.

EMC Celerra AVM Layering Scheme

I would often scratch my head about AVM. Disks were grouped into RAID groups, which were presented as LUNs (Logical Unit Numbers). These were then defined as Celerra® dvols (disk volumes), and stripes of the dvols were consolidated into a storage pool.

From the pool, a piece of storage capacity called a slice volume was combined with other slice volumes into a metavolume, which eventually was presented as a file system to the network and the respective NAS clients. Explaining this took an effort, and I was the IP Storage product manager for EMC® between 2007 and 2009. It was a far cry from the simplicity of the NetApp® Data ONTAP 7 architecture of RAID groups and volumes, and the WAFL® (Write Anywhere File Layout) filesystem.

Another complicated layered framework I often gripe about is Ceph. Here is a look at how the layers of CephFS are constructed.

Ceph Storage Layered Framework

I work with the OpenZFS filesystem a lot. It is something I am rather familiar with, and the layered structure of the ZFS filesystem is essentially much simpler, as the short sketch below illustrates.
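As a hedged illustration of that simplicity on a plain OpenZFS system (the pool name tank and the device names are hypothetical; TrueNAS® manages sharing through its own middleware), the whole stack from physical disks to a NAS export can be stood up in a few commands:

# zpool create tank mirror da0 da1                 # physical disks grouped into a mirrored vdev and pool (placeholder devices)
# zfs create -o compression=lz4 tank/projects      # a dataset carved out of the pool
# zfs set sharenfs=on tank/projects                # the dataset exposed as an NFS export

From the administrator’s point of view, that is the entire layering: vdevs, zpool, dataset, share.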

Storage architecture mixology

Engineers are bizarre when they get too creative. They have a can-do attitude that sometimes transcends the boundaries of practicality and boggles many minds. This is what happens when they have their own mixology ideas.

Recently I spoke to two magnanimous persons who had the idea of presenting Ceph iSCSI LUNs to the ZFS filesystem in order to use the simplicity of the NAS file sharing capabilities in TrueNAS® CORE. In their own words, Ceph’s NAS capabilities sucked. I had to draw their whole idea out in PowerPoint, and this is the architecture I got from the conversation.

There are 3 different storage subsystems here just to provide NAS. As if the Ceph layers aren’t complicated enough, the iSCSI LUNs from Ceph are presented as Cinder volumes to the KVM hypervisor (or VMware® ESXi) through the Cinder driver. Cinder is the persistent storage volume subsystem of the OpenStack® project. The Cinder volumes/hypervisor datastores are virtualized as vdisks to the respective VMs installed with TrueNAS® CORE and the OpenZFS filesystem. From TrueNAS® CORE, shares and exports are provisioned via the SMB and NFS protocols to Windows and Linux clients respectively.

It works! As I was told, it worked!

A.P.P.A.R.M.S.C. considerations

Continuing from the layered framework described above for NAS, other aspects besides the technical work have to be considered, even when the design works technically.

I often use a set of diligent data storage focal points when considering a good storage design and implementation. This is the A.P.P.A.R.M.S.C. Take, for instance, Protection as one of the points, with snapshots as the technology to use.

Snapshots can be executed at the ZFS level on the TrueNAS® CORE subsystem. Snapshots can be triggered at the volume level in the OpenStack® subsystem, and likewise rbd snapshots at the Ceph subsystem, as the sketch below shows. The question is, which snapshot at which storage subsystem is the most valuable to the operations and business? Do you run all 3 snapshots? How do you execute them in succession in a scheduled policy?
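To make the dilemma concrete, here is a hedged sketch of what a snapshot looks like at each of the 3 subsystems (the pool, dataset, volume and image names are all hypothetical):

# zfs snapshot tank/shares@daily-0300                                     # ZFS level, inside the TrueNAS CORE VM (placeholder names)
# openstack volume snapshot create --volume truenas-data-vol daily-0300   # Cinder volume level in OpenStack
# rbd snap create rbd/truenas-data-img@daily-0300                         # RBD image level in Ceph

Three snapshots of what is essentially the same data, each with its own schedule, retention and restore semantics.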

In terms of performance, can the design truly maximize its potential? Can it churn out the best IOPS and deliver at wire speed? What latency can we expect with so many layers from 3 different storage subsystems?

And supporting this said architecture would be a nightmare. Where do you even start the troubleshooting?

Those are just a few considerations and questions to think about when such a layered storage architecture comes along. IMHO, the design was over-engineered. I was tempted to say, “Just because you can, doesn’t mean you should”.

Elegance in Simplicity

Einstein (I think) is quoted as saying:

Einstein’s quote on simplicity and complexity

I am not saying that having too many layers is wrong. A heavily layered architecture works for many storage solutions out there, where it is often masked with a simple and intuitive UI. But in yours truly’s point of view, as a storage architecture enthusiast and connoisseur, there is beauty and elegance in simple designs.

The purpose here is to promote a better understanding of the storage layers, and how they integrate and interact with each other to deliver data services to the network. In the end, that is how most storage architectures are built.

 

Discovering OpenZFS Fusion Pool

Fusion Pool excites me, but unfortunately this new key feature of OpenZFS is hardly talked about. I would like to introduce the Fusion Pool feature as iXsystems™ expands the TrueNAS® Enterprise storage conversations.

I would not say that this technology is revolutionary. Other vendors already have a similar concept to Fusion Pool. The most notable (to me) is NetApp® Flash Pool, and I am sure other enterprise storage vendors have their equivalents. But this is a big deal (for me) for an open source file system like OpenZFS.

What is Fusion Pool (aka ZFS Allocation Classes)?

To understand Fusion Pool, we have to understand the basics of the ZFS zpool. A zpool is an aggregation (borrowing the NetApp® terminology) of vdevs (virtual devices), and a vdev is a collection of physical drives configured with one of the OpenZFS RAID levels (RAID-0, RAID-1, RAID-Z1, RAID-Z2, RAID-Z3 and a few nested RAID permutations). A zpool can start with one vdev, and new vdevs can be added on-the-fly, expanding the capacity of the zpool online, as sketched below.
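A minimal sketch of that online expansion, assuming a hypothetical pool named tank and placeholder device names:

# zpool create tank raidz2 da0 da1 da2 da3    # zpool starts life with a single RAID-Z2 vdev (placeholder devices)
# zpool add tank raidz2 da4 da5 da6 da7       # a second vdev added on-the-fly; capacity grows online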

There were several types of vdevs prior to Fusion Pool, as of pre-TrueNAS® version 12.0. As shown below, these are the types of vdevs available to a zpool at present.

OpenZFS zpool and vdev types – Credit: Jim Salter and Arstechnica

Fusion Pool is a zpool that integrates a new, special type of vdev alongside the other normal vdevs. This special vdev is designed to work with small data blocks between 4K and 16K, and is highly efficient in handling the random reading and writing of these small blocks. This bodes well for the OpenZFS file system’s metadata blocks and the blocks of other small files. And the random nature of these read/write I/Os works best with SSDs (which can be read-intensive or write-intensive SSDs).
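As a hedged command-line sketch of how such a special vdev might be attached to an existing pool (the pool, dataset and device names are hypothetical; TrueNAS® 12.0 exposes the same capability through its UI):

# zpool add tank special mirror nvd0 nvd1             # mirrored SSD special vdev for metadata and small blocks (placeholder devices)
# zfs set special_small_blocks=16K tank/mydataset     # also steer data blocks of 16K or smaller to the special vdev

The special vdev should be mirrored, because losing it means losing the pool’s metadata, and with it the pool.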

Continue reading

Storageless shan’t be thy name

Storageless??? What kind of a tech jargon is that???

This latest jargon irked me. Storage vendor NetApp® (through its acquisition of Spot) and Hammerspace, a metadata-driven, storage-agnostic orchestration technology company, have begun touting the “storageless” tech jargon in the hope that it will become an industry buzzword. Once again, the hype cycle jargon junkies are hard at work.

Clear, nondescript storage containers

It is obvious that the storageless jargon wants to ride on the hype of serverless computing, an abstraction of computing resources where the allocation and the consumption of resources are defined by the programmatic code of the running application. The “calling” of the underlying resources is driven by the application’s code, thus rendering the computing resources invisible, insignificant and not sexy.

My stand

Among the 3 main infrastructure technologies – compute, network and storage – storage is a bit of a science and a bit of dark magic. It is complex, and that is what makes storage technology so beautiful. The constant innovation and technology advancement continue to make storage a relentlessly interesting data services platform.

Cloud, Kubernetes and many data-as-a-service platforms require strong persistent storage. As defined in the NIST Definition of Cloud Computing, 4 of the 5 tenets – on-demand self-service, resource pooling, rapid elasticity and measured service – demand that storage be abstracted. Therefore, I am all for the abstraction of storage resources from the data services platform.

But the storageless jargon is doing a great disservice. It is not helping. It does not lend its weight to glorifying the innovations of storage. In fact, IMHO, it feels like a weighted anchor sinking storage into the deepest depths, invisible, insignificant and not sexy. I am here dutifully to promote and evangelize storage innovations, and I am duly unimpressed with such jargon.

Continue reading

A Paean to NFS

It is certainly encouraging to see both NAS protocols, NFS and SMB, featured well in the latest VMware® vSAN 7 Update 1 release. NFS v3 and v4.1 support was already in vSAN 7.0 when it was announced earlier as part of the Native File Services for vSAN. But some years ago, NFS was not always the primary storage protocol of choice. The SAN protocols, Fibre Channel and iSCSI, were almost always designated to serve enterprise applications. On the client side, Windows became prominent, and the SMB/CIFS protocol dominated the desktop landscape. This further pushed NFS into the back closet.

NFS, or Network File System, has its naysayers. The venerable, but often maligned, distributed network file protocol is 36 years old today. At storage vendors such as NetApp®, VAST Data, Pure Storage (FlashBlade) and Dell EMC (Isilon), NFS is still positioned as the primary file protocol for manufacturing testers on the shop floor, EDA/eCAD applications, seismic and subsurface applications in Oil & Gas, and many more. In another development, just like its presence in the vSAN Native File Services, NFS has also quietly embedded itself into many storage platforms to serve the data platform services within their respective frameworks.

I have experienced NFS from the client side all the way to enterprise applications and more, so I take this opportunity to pay tribute.

NFS (Network File System) client server network

Continue reading

Valuing the security value of NAS storage

Garmin reportedly paid millions. Do you sleep well at night knowing that the scourge of ransomware is rampant and ever threatening your business? Is your storage safe enough, or have you invested in storage that was merely economical (also known as cheap) for your pocket?

Garmin was hacked by ransomware

I have highlighted this before. NAS (Network Attached Storage) has become the goldmine for ransomware. And in the mire of this COVID-19 pandemic, the lackadaisical attitude towards securing NAS storage remains. More often than not, end users and customers, especially in the small and medium enterprise segment, continue to search for the most economical NAS storage to use in their business.

Is price the only factor?

Why do customers and end users like to look at price alone? Is the economical capital outlay of a cheap NAS storage with a 3-year hardware warranty and shallow technical support really that significant in appeasing the pocket gods? Some end users might decide to rent cloud file storage, Hotel California style, until they count up the 3-year “rental” price.

Continue reading

Persistent Storage could stifle Google Anthos multi-cloud ambitions

To win in the multi-cloud game, you have to be in your competitors’ clouds. Google Cloud has been doing that since it announced Google Anthos just over a year ago. It has been crafting its “assault”, starting with on-premises and Anthos on AWS. Anthos on Microsoft® Azure is coming, currently in preview mode.

Google CEO Sundar Pichai announcing Google Anthos at Next ’19

BigQuery Omni conversation starter

2 weeks ago, whilst the Google Cloud BigQuery Omni announcement was still under wraps, the local Malaysian IT portal Enterprise IT News sent me the embargoed article to seek my views and opinions. I have to admit that I was ignorant about the deeper workings of BigQuery, and I had not fully gone through the workings of Google Anthos either. So I researched them.

Having done some small work on Qubida (now defunct) and Talend several years ago, I had grasped useful data analytics and data enablement concepts, and so BigQuery fitted into my understanding of BigQuery Omni quite well. That triggered my interest to write this blog, meshing the persistent storage conundrum (at least for me, something to be untangled) with Kubernetes, with GKE (Google Kubernetes Engine), and thus with Anthos as well.

For discussion’s sake, here is an overview of BigQuery Omni.

An overview of Google Cloud BigQuery Omni on multiple cloud providers

My comments and views are in this EITN article “Google Cloud’s BigQuery Omni for Multi-cloud Analytics”.

Continue reading