Open Source on my mind

Last week was packed with topics around Open Source software. I want to voice my opinions here (with a bit of ranting), and I hope not to rouse too many hostile comments from the different parties and views. This blog is meant to create conversations, even controversial ones, but we must first accept that there will be disagreements, and treat those disagreements as part of the conversation.

In my 30-year career, Open Source has been a big part of my development and progress. The idea of freely using (certain) software without licensing implications, with its source openly available, was not always as welcome as it is now. I think the Open Source revolution has created an innovation movement that is still going strong. It has not only permeated the IT industry completely; Open Source is now in almost every part of the technology-based industries as well. The Open Source influence is massive.

Open Source word cloud

In the beginning

In the beginning, in my beginning in 1992, the availability of software and its source code was a closed affair. Coming from a VAX/VMS background (I was a system admin on my mathematics department’s minicomputers), Unix liberated my thinking. My final 6 months at university were spent on systems programming in C, and it completely changed how I wanted my career to take shape. The “Free as in Freedom” mantra of the GNU General Public License (GPL), which I got to know of much later, sat well with my own tenets in life.

If closed source development models led to proprietary software and a centralized way of distributing software under license, I would count the Open Source development models as one of the earliest decentralized technology frameworks. Down with the capitalistic corporations (aka Evil Empires)!

It was certainly a wonderful and generous way to make the world what it is today. It is a better world now.

Continue reading

Celebrating MinIO

Essentially MinIO is a web server …

I vaguely recall Anand Babu Periasamy (AB, as he is known), the CEO of MinIO, saying that when I first met him in 2017. I was freshly “playing around” with MinIO, and I instantly fell in love with the software technology. Wait a minute. Object storage wasn’t supposed to be so easy. It was not supposed to be that simple to set up and use, but MinIO burst into my storage universe like the birth of the Infinity Stones. There was a eureka moment. And I was attending one of the Storage Field Days in the US shortly after my MinIO discovery in late 2017. What an opportunity!
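That simplicity still holds. As a minimal sketch of it, assuming a local server started with `minio server /data` and its default development credentials (`minioadmin`/`minioadmin`), this is roughly all it takes to create a bucket and store an object with the `minio` Python SDK; the bucket and file names are placeholders of my own:

```python
from minio import Minio

# Connect to a local MinIO server (default dev credentials, no TLS).
client = Minio(
    "localhost:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)

# Create a bucket if it does not exist, then upload a local file as an object.
bucket = "demo-bucket"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

client.fput_object(bucket, "hello.txt", "./hello.txt")
print("stored hello.txt in", bucket)
```

A server binary, a few lines of S3-compatible client code, and you have object storage. That was the eureka moment.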

I cannot recall how I made the appointment to meet MinIO, but I remember taking an Uber to their cosy office on University Avenue in Palo Alto. Through Andy Watson (one of the CTOs then), I was introduced to AB; Garima Kapoor, MinIO’s COO and AB’s wife; Frank Wessels; Zamin (one of the business people, who is no longer there); and Ugur Tigli (East Coast CTO), who was on the Polycom. I was awestruck.

Last week, MinIO scored a major Series B funding round of USD 103 million. It had been delayed by the pandemic; I recall Garima telling me that the funding was supposed to happen in 2020. But I think the delay made it better, because the world is now more ready for MinIO than ever before.

Continue reading

Open Source Storage Technology Crafters

The conversation often starts with a challenge. “What’s so great about open source storage technology?”

For the casual end users of storage systems, whether SAN (definitely not Fibre Channel) or NAS on-premises, or getting “files” from personal cloud storage like Dropbox, OneDrive et al., there is a strong presumption that open source storage technology is cheap and flaky. This is not helped by the diet of consumer-brand NAS boxes in the market, where the price is cheap but the storage capabilities, reliability and performance are found wanting. That notion floats its way up to business and enterprise users, and often ends up as a negative perception of open source storage technology.

Highway Signpost with Open Source wording

Storage Assemblers

Anybody can “build” a storage system with open source storage software. Put the software on any commodity x86 server, and it can deliver basic storage services. Most open source storage software does the job pretty well. However, once the completed storage system is put together, can it do the job well enough to serve a business-critical end user? I have plenty of sob stories from end users I have spoken to over my many years in the industry, related to so-called “enterprise” storage vendors, and I have written a few blogs in the past about these sad situations.

Such storage offerings come rigged with cybersecurity risks and holes too. In a recent Unit 42 report, 250,000 NAS devices were found vulnerable and exposed to the public Internet. The brands in question are named in the report.

I would categorize these as storage assemblers.

Continue reading

Don’t go to the Clouds. Come back!

Almost in tandem last week, Nutanix™ and HPE appeared to have made disparaging comments about the Cloud First mandates of many organizations today. Nutanix™ used its annual .NEXT conference to send the message that cloud is wasteful. HPE campaigned against the UK public sector’s “Cloud First” policy.

Cloud First or Cloud Not First

The anti-cloud-first messaging sounded a bit funny and hypocritical when both companies have a foot in the public clouds, advocating the clouds to many of their customers. So what gives?

That A16Z report

For a number of years, many feared criticizing public cloud services openly. For me, there are 3 C-bombs in the public clouds:

  • Costs
  • Complexity
  • Control (lack of it)

Yes, we would hear of a few mini heart attacks here and there about clouds overcharging customers, and of security fallouts. But the vendors, who then looked up to the big 3 public clouds as deities, rarely chastised them for these errors. Until recently.

“The Cost of Cloud, a Trillion Dollar Paradox”, released by the revered VC firm Andreessen Horowitz in May 2021, emboldened several vendors to make stronger comments about the shortcomings of public cloud services. The report made it evident that public cloud services are not the panacea for all IT woes.

And looking at the trends, this will only get louder.

Use ours first. We are better

It is pretty obvious that both Nutanix™ and HPE have bigger stakes outside of public cloud IaaS (infrastructure-as-a-service) offerings. It is also pretty obvious that neither is among the biggest players in this cloud-first economy. Given their weight in their respective markets, they are leveraging their positions to swing mindsets onto turf where they can win.

“Use our technology and services. We are better, even though we are also in the public clouds.”

Not a zero sum game

But IT services and IT technologies are not a zero-sum game. On-premises IT services and complementary public cloud services can co-exist. Each can leverage the other’s strengths and cover the other’s weaknesses, if you know how to blend and assimilate the best of both worlds. Hybrid cloud is the new black.

Gartner Hype Cycle

The IT pendulum swings. Technology hype reaches fever pitch. Everyone thinks there is a cure for cancer. Reality sets in. They realize that they were wrong (not completely) or right (not completely). Life goes on. The Gartner® Hype Cycle explains this very well.

The cloud is OK

There are many merits to having IT services provisioned in the cloud: agility, pay-per-use, OPEX, burst traffic, seemingly unlimited resources and so on. You can read more about it in Benefits of Cloud Computing: The pros and cons. Even AWS agrees, in Three things every business needs from hybrid cloud, perhaps to the chagrin of the naysayers.

I have opined that there is no single solution for everything. There is no Best Storage Technology Ever (a snarky post). And so I believe there is nothing wrong with Nutanix™ and HPE, and maybe others, being a little hypocritical about their cloud and non-cloud technology offerings. These companies are adjusting and adapting to the changing landscape of IT environments, but it is best not to confuse customers about what is tactics, what is strategy, and what is vision. Inconsistency in messaging diminishes trust.

Is Software Defined right for Storage?

George Herbert Leigh Mallory, mountaineer extraordinaire, was once asked, “Why did you want to climb Mount Everest?”, to which he replied, “Because it’s there”. That retort demonstrated the indomitable human spirit, and perhaps best exemplified the relationship between the human desire to conquer and the physical limits of nature. The software of humanity versus the hardware of planet Earth.

A similar juxtaposition can be made between software and hardware in computer systems, and in storage technology in particular. Here there are a few schools of thought on delivering storage services, the notable ones being the storage appliance model and the software-defined storage model.

There are arguments, of course. Some are genuinely partisan, but many a time these arguments come in the form of the flavour of the moment. I have experienced past companies of mine touting the storage appliance model very strongly in the beginning, only to switch to a “software company” chorus years later. That is what I mean by the “flavour of the moment”.

Software Defined Storage

Continue reading

Layers in Storage – For better or worse

Storage arrays and storage services are built upon layers and layers within their architectures. The physical hard disk drives and solid state drives are abstracted into RAID volumes and virtualized into other storage constructs, before being exposed as shares/exports, LUNs or objects to the network.

Everyone in the storage networking industry is cognizant of these layers; they are the foundation of knowledge and experience. The public cloud storage services side is the same, albeit more opaque. Nevertheless, both have layers.

In the early 2000s, the SNIA® Technical Council outlined a blueprint of the SNIA® Shared Storage Model, a framework describing the layers and properties of a storage system and its services. It was similar to the OSI 7-layer model for networking. The framework has helped many industry professionals and practitioners shape their understanding and develop knowledge in their respective fields. The layering scheme of the SNIA® Shared Storage Model is shown below:

SNIA Shared Storage Model – The layering scheme

Storage vendors’ layering schemes

While the SNIA® storage layers were generic and open, each storage vendor had their own proprietary implementation of storage layers. Some of these architectures are simple, but some I find a bit too complex and convoluted.

Here is an example of the layers of the Automated Volume Management (AVM) architecture of the EMC® Celerra®.

EMC Celerra AVM Layering Scheme

I would often scratch my head over AVM. Disks were grouped into RAID groups, which were presented as LUNs (Logical Unit Numbers). These were then defined as Celerra® dvols (disk volumes), and stripes of the dvols were consolidated into a storage pool.

From the pool, pieces of storage capacity, called slice volumes, were combined into a metavolume, which was eventually presented as a file system to the network and its NAS clients. Explaining this took an effort, and I did it often because I was the IP Storage product manager for EMC® between 2007 and 2009. It was a far cry from the simplicity of the NetApp® ONTAP 7 architecture of RAID groups and volumes, and the WAFL® (Write Anywhere File Layout) file system.
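Just to visualize that nesting, here is a toy model of the AVM constructs as plain Python classes. This is purely illustrative with my own naming and sizes, not any real Celerra® interface:

```python
from dataclasses import dataclass

# Purely illustrative model of the AVM layering described above.

@dataclass
class Dvol:                    # disk volume defined on a RAID-group LUN
    name: str
    size_gb: int

@dataclass
class StoragePool:             # stripes of dvols consolidated into a pool
    name: str
    dvols: list

@dataclass
class SliceVolume:             # a slice of capacity cut from the pool
    pool: StoragePool
    size_gb: int

@dataclass
class Metavolume:              # slice volumes combined, presented as a file system
    name: str
    slices: list

    def capacity_gb(self) -> int:
        return sum(s.size_gb for s in self.slices)

pool = StoragePool("pool0", [Dvol("d7", 500), Dvol("d8", 500)])
fs = Metavolume("fs01", [SliceVolume(pool, 200), SliceVolume(pool, 200)])
print(fs.capacity_gb())        # 400 GB presented to the NAS clients
```

Four constructs stand between a disk and a file system, and that is before the NAS export itself.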

Another complicated layered framework I often gripe about is Ceph. Here is a look at how the layers of CephFS are constructed.

Ceph Storage Layered Framework

I work with the OpenZFS file system a lot. It is something I am rather familiar with, and the layered structure of the ZFS file system is essentially simpler, as the sketch below suggests.
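In the same toy-model style as the AVM sketch above, the OpenZFS layering collapses to essentially pool and dataset; again, the names are illustrative only:

```python
from dataclasses import dataclass

# Purely illustrative: OpenZFS goes disk -> vdev -> pool -> dataset.

@dataclass
class Vdev:                    # a mirror or RAID-Z group of physical disks
    disks: list

@dataclass
class Zpool:                   # vdevs aggregated into a single pool
    name: str
    vdevs: list

@dataclass
class Dataset:                 # a file system carved from the pool
    pool: Zpool
    name: str

tank = Zpool("tank", [Vdev(["da0", "da1", "da2"])])
share = Dataset(tank, "tank/projects")   # shared out via NFS/SMB as-is
```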

Storage architecture mixology

Engineers can be bizarre when they get too creative. They have a can-do attitude that sometimes transcends the boundaries of practicality and boggles many minds. This is what happens when they have their own mixology ideas.

Recently I spoke to two magnanimous persons who had the idea of providing Ceph iSCSI LUNs to the ZFS file system, in order to use the simplicity of the NAS file-sharing capabilities in TrueNAS® CORE. In their own words, Ceph’s NAS capabilities sucked. I had to draw their whole idea out in PowerPoint, and this is the architecture I got from the conversation.

There are 3 different storage subsystems here just to provide NAS. As if the Ceph layers aren’t complicated enough, the iSCSI LUNs from Ceph are presented as Cinder volumes to the KVM hypervisor (or VMware® ESXi) through the Cinder driver. Cinder is the persistent storage volume subsystem of the OpenStack® project. The Cinder volumes/hypervisor datastores are virtualized as vdisks to the respective VMs running TrueNAS® CORE and the OpenZFS file system. From TrueNAS® CORE, shares and exports are provisioned via the SMB and NFS protocols to Windows and Linux clients respectively.

It works! As I was told, it worked!

A.P.P.A.R.M.S.C. considerations

Continuing from the layered framework described above for NAS, there are other aspects to consider besides the technical work, even when it works technically.

I often use a set of diligent data storage focal points when considering a good storage design and implementation: the A.P.P.A.R.M.S.C. Take, for instance, Protection as one of the points, with snapshots as the technology to use.

Snapshots can be executed at the ZFS level on the TrueNAS® CORE subsystem. Snapshots can be triggered at the volume level in the OpenStack® subsystem and, likewise, RBD snapshots at the Ceph subsystem. The question is: which snapshot at which storage subsystem is the most valuable to operations and the business? Do you run all 3 snapshots? How do you execute them in succession under a scheduled policy?
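To make the succession question concrete, here is a hedged sketch of what a single scheduled job running all 3 snapshots might look like, using the standard `zfs`, `openstack` and `rbd` CLIs. The dataset, volume and image names are hypothetical, and in a real deployment each command would run on a different host (the TrueNAS® CORE VM, the OpenStack® controller, a Ceph node), so this only illustrates the ordering problem, not a working policy:

```python
import subprocess
from datetime import datetime, timezone

# One timestamp tag so the three snapshots can be correlated later.
tag = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")

commands = [
    # 1. File-level snapshot inside the TrueNAS CORE VM (hypothetical dataset name).
    ["zfs", "snapshot", f"tank/share@{tag}"],
    # 2. Volume-level snapshot at the OpenStack/Cinder layer (hypothetical volume name).
    ["openstack", "volume", "snapshot", "create",
     "--volume", "truenas-vdisk", f"snap-{tag}"],
    # 3. Block-level snapshot at the Ceph/RBD layer (hypothetical pool/image name).
    ["rbd", "snap", "create", f"rbd/truenas-lun@{tag}"],
]

# Run top-down, stopping at the first failure; coordinating ordering and
# failure handling across three subsystems is exactly the operational burden.
for cmd in commands:
    subprocess.run(cmd, check=True)
```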

In terms of performance, can this architecture truly maximize its potential? Can it churn out the best IOPS and deliver at wire speed? What latency can we expect with so many layers across 3 different storage subsystems?

And supporting such an architecture would be a nightmare. Where would you even start troubleshooting?

Those are just a few of the considerations and questions to think about when such a layered storage architecture comes along. IMHO, the design was over-engineered. I was tempted to say, “Just because you can, doesn’t mean you should.”

Elegance in Simplicity

Einstein (I think) is quoted as saying:

Einstein’s quote on simplicity and complexity

I am not saying that having many layers is wrong. A heavily layered architecture works for many storage solutions out there, where it is often masked by a simple and intuitive UI. But in yours truly’s point of view, as a storage architecture enthusiast and connoisseur, there is beauty and elegance in simple designs.

The purpose here is to promote a better understanding of the storage layers, and of how they integrate and interact with each other to deliver data services to the network. In the end, that is how most storage architectures are built.

Give back or no give

[ Disclosure: I work for iXsystems™ Inc. Views and opinions are my own. ]

If my memory serves me right, I recall the illustrious leader of the Illumos project, Garrett D’Amore, ranting about companies, big and small, taking OpenZFS open source code and projects to incorporate into their own technology, but hardly ever giving back to the open source community. That was almost 6 years ago.

My thoughts immediately went back to the days when open source was starting to take off in the early 2000s. The Oracle 9i database had just embraced Linux in a big way, and the book by Eric S. Raymond, “The Cathedral and the Bazaar”, was a big hit.

The Cathedral & The Bazaar by Eric S. Raymond

Since then, the blooming days of the proprietary software world began to wilt, and over the next twenty-plus years, open source software pretty much took over the world. Even Microsoft®, the ruthless ruler of the Evil Empire, caved in to some of the open source calls. The Microsoft® “I Love Linux” embrace definitely gave the feeling of a Rebellion victory over the Empire. Open Source won.

Open Source bag of worms

Even with the concerted efforts of the open source communities and projects, there have been many situations that caused friction and, inadvertently, major issues as well. There are several open source project licenses, and they are not always compatible when different open source projects mesh together for the greater good.

On the storage side of things, 2 “incidents” caught the attention of the masses. For instance, Linus Torvalds, the Linux BDFL (Benevolent Dictator for Life) and emperor supremo, said “Don’t use ZFS”, partly out of ignorance and partly due to the incompatibility between the Linux GPL (General Public License) and the ZFS CDDL (Common Development and Distribution License). That ruffled some feathers in the OpenZFS community, so much so that Matt Ahrens, the co-creator of the ZFS file system and OpenZFS community leader, had to defend OpenZFS against Linus’ comments.

Continue reading

The instant value of Open Source Storage

[ Full disclosure: I work for iXsystems™. Opinions and views are mine. ]

TrueNAS Open Storage logo

The story began …

It was 2011. A friend of a friend called me out of the blue. He was rambling about his company’s storage needs. I recall vividly that he wanted 100TB, and that Dell and HP (before HPE) were hopeless at doing NAS (network attached storage) in an Apple environment. They had assembled a Frankenstein-ish NAS and plastered a price of over MYR100K on it.

In his environment, the Apple workstations were connected to dozens of WD Cloud Book storage units (or whatever they were called back then), daisy-chained to each other via FireWire. I recall one workstation had 3 WD “books” daisy-chained together. That took care of the exploding storage needs, but performance sucked. With every 2nd or 3rd user, access to files was at a snail’s pace, sometimes taking more than 2 minutes to open a file.

At that time, my old colleague at Sun was fervently talking about ZFS and OpenSolaris™. I told him about this opportunity, and so we began. It was he who used the word “crafter”. “We are not building,” he said, “we are crafting.” He was right.

OpenSolaris logo

Continue reading

Tiger Bridge extending NTFS to the cloud

[Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the vendors’ technologies presented at this event. The content of this blog represents my own opinions and views.]

The NTFS file system has been around for almost 3 decades. It has been the most important piece of the Microsoft Windows universe, although Microsoft has been positioning ReFS (Resilient File System) as its replacement since Windows Server 2012. Despite Microsoft’s best efforts, issues with ReFS remain, and thus NTFS is still the most reliable, go-to file system in Windows.

First reaction to Tiger Technology

When Tiger Technology was first announced as a sponsor of Storage Field Day 19, I was excited about a company with such a cool name. Soon after, I realized that I had encountered the name before, in the media and entertainment space.


Continue reading