Stating the case for a Storage Appliance approach

I was in Indonesia last week to meet with iXsystems™' partner PT Maha Data Solusi. I had the wonderful opportunity to meet with many people there, and one interesting, oft-repeated question arose. Why aren't iX doing software-defined storage (SDS)? It was a very obvious and deliberate question.

After all, iX is already providing the free use of the open source TrueNAS® CORE software, which runs on many x86 systems as an SDS solution, and yet commercially, iX sells the TrueNAS® storage appliances.

This argument between the storage appliance model and the software-only storage model has been debated for more than a decade, and it comes up in my conversations on and off. I finally want to address it here, with my own views and opinions. And I want to state that I am open to both models, because as a storage consultant, I see that both have their pros and cons, advantages and disadvantages. Up front, I gravitate towards the storage appliance model, and here's why.

My story of the storage appliance begins …

Back in the 90s, most of my work was on Fibre Channel and NFS. iSCSI did not exist yet (it was only ratified in 2003). My work was almost exclusively on Sun Microsystems® enterprise storage, with Sun's resell of the Veritas® software suite that included the Veritas® Volume Manager (VxVM), Veritas® Filesystem (VxFS), Veritas® Replication (VxVR) and Veritas® Cluster Server (VCS). I didn't do much Veritas® NetBackup (NBU), although I was trained at Veritas® in Boston in July 1997 (I remember that 2-week trip fondly). It was just over 2 months after Veritas® acquired OpenVision; Backup Plus was the product that became NetBackup.

Between 1998 and 1999, I spent a lot of time working on Sun NFS servers. The prevalent networking speed at that time was 100Mbits/sec. And I remember having this argument with a Sun partner engineer by the name of Wong Teck Seng. Teck Seng was an inquisitive fella (still is), and he was raving about a purpose-built NFS server he knew of, sharing his experience with me. I brushed him off, dismissing his always-on tech euphoria, and did not see anything great in a NAS storage appliance. Auspex™ was big then, and I knew of them.

I joined NetApp® as Malaysia's employee #2. It was an odd first few months working with a storage appliance, but after a couple of months, I started to understand and appreciate the philosophy. The storage appliance model made sense to me then, and it still does today.

Continue reading

As Disk Drive capacity gets larger (and larger), the resilient Filesystem matters

I just got home from the wonderful iXsystems™ Sales Summit in Knoxville, Tennessee. The key highlight was to christen the opening of the iXsystems™ Maryville facility, the key operations center that will house iX engineering, support and part of marketing as well. News of this can be found here.

iX datacenter in the new Maryville facility

Western Digital® has always been a big advocate of iX, and at the Summit, they shared their hard disk drive (HDD), solid state drive (SSD) and other storage platform roadmaps. I felt like a kid in a candy store, because I love all this excitement in the disk drive industry. Who says HDDs are going to be usurped by SSDs?

Several disk drive manufacturers, including Western Digital®, have announced larger capacity drives. Here is some of the news from each vendor in recent months.

Other than the AFR (annualized failure rate) numbers published by Backblaze every quarter, the capacity factor has always been a measurement of high interest in the storage industry.

Continue reading

Unstructured Data Observability with Datadobi StorageMAP

Let's face it. Data is bursting through its storage seams. And every organization is now storing so much data that they don't know what they have.

IDC predicts that by 2025, 80% of the world's data will be unstructured. IDC's Global DataSphere Forecast 2021-2025 report projects that global data creation and replication will expand to 181 zettabytes, an unfathomable figure. Organizations are inundated. They struggle with data growth, with little understanding of what data they have, where the data is residing, what to do with the data, and how to manage the voluminous data deluge.

The simple knee-jerk action is to store it in cloud object storage, where the price of storage is $0.0000xxx/GB/month. But many IT departments in these organizations overlook the fact that the data they have parked in the cloud requires movement between the cloud and on-premises. I have been involved in numerous discussions where the customers realized that the data they had moved into the cloud moved back and forth too frequently. Often it was an error of judgement or short-term blindness (blinded by the cheap storage costs, no doubt), further exacerbated by the pandemic. These oversights have resulted in expensive and painful monthly API calls and egress fees. Welcome to reality. Suddenly the cheap cloud storage doesn't sound so cheap after all.
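To make the point concrete, here is a back-of-the-envelope sketch in Python. All of the figures – capacity, egress volume and per-GB rates – are hypothetical placeholders of my own, not any particular cloud provider's price list; plug in your own numbers.

```python
# Back-of-the-envelope cloud storage bill. All rates and volumes below are
# hypothetical placeholders for illustration -- substitute your provider's actual pricing.
capacity_gb     = 100_000     # 100 TB parked in cloud object storage
storage_rate    = 0.004       # assumed $/GB/month for an "archive-class" tier
egress_gb       = 20_000      # 20 TB pulled back on-premises each month
egress_rate     = 0.09        # assumed $/GB egress charge
api_calls       = 5_000_000   # GET/LIST requests per month
api_rate_per_1k = 0.0004      # assumed $ per 1,000 requests

at_rest  = capacity_gb * storage_rate
egress   = egress_gb * egress_rate
api_fees = (api_calls / 1_000) * api_rate_per_1k

print(f"At-rest storage : ${at_rest:10,.2f}/month")
print(f"Egress          : ${egress:10,.2f}/month")
print(f"API requests    : ${api_fees:10,.2f}/month")
print(f"Total           : ${at_rest + egress + api_fees:10,.2f}/month")
```

Even with made-up numbers, the shape of the bill is the point: once the data has to keep moving back and forth, the "cheap" at-rest tier is no longer the dominant line item.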

The same can be said about storing non-active unstructured data on primary storage. Many organizations have not been disciplined enough to practise good data management. The primary Tier 1 storage becomes bloated over time, grinding sluggishly as the data capacity grows. I/O processing becomes painfully slow and backups take longer and longer. Sound familiar?

The A in ABC

I brought up the ABC mantra a few blogs ago. A is for Archive First. It is part of my data protection consulting practice conversation repertoire, and I use it often to advise IT organizations to be smart with their data management. Before archiving (some folks like to call it tiering, but I am not going down that argument today), we must know what to archive. We cannot blindly send all sorts of junk data to the secondary or tertiary storage premises. If we do that, it is akin to digging another hole to fill up the first hole.

We must know which unstructured data to move, replicate or sync from the Tier 1 storage to a second (or third) less taxing storage premises. We must be able to see this data, observe its behaviour over time, and decide the best data management practice to apply to it. Take note that I said best data management practice and not best storage location in the previous sentence. There has to be a clear distinction: a data management strategy is more prudent than a "best" storage premises. The reason is that many organizations ignorantly think the best storage location (the thought of the "cheapest" always seems to creep up) is a good strategy, while ignoring the fact that data is like water. It moves from premises to premises, from on-prem to cloud, from one cloud to another. Data mobility is a variable in data management.

Continue reading

Building Trust in the Storage Brand

Trust is everything. When done right, the brand is trust.

One Wikibon article last month, "Does Hardware (still) Matter?", touched on my sentiments and hit close to the heart. As the world becomes more and more data-driven and cloud-centric, the prominence of IT infrastructure has diminished from the purview of the boardroom. The importance of IT infrastructure cannot be discounted, but in this new age, storage infrastructure has become invisible.

In the seas of both on-premises and hybrid storage technology solutions, everyone is trying to stand out, trying to eke out the minutest ounces of differentiation and advantage to gain the customer's micro-attention. With all the drum beating, the loyalty of the customer can switch in an instant unless we build trust.

I ponder a few storage industry variables that help build trust.

Open source Communities and tribes

During the heyday of proprietary software and OSes, protectionism was key to guarding the differentiations and the advantages. Licenses were common, and some were paired with the hardware hostid to create that "power combination". And who can forget those serial dongle license keys? Urgh!!

Since the open source movement began (read The Cathedral and the Bazaar), the IT world has come to trust software and OSes more and more. Open source communities grew and technology tribes were formed in all types of niches, including storage software. Trust grew because the population of these communities kept the vendors honest. Gone are the days of the Evil Empire. Even Microsoft® became a 'cool kid'.

TRUST

One open source storage filesystem I have worked extensively with is OpenZFS. From its beginnings after OpenSolaris® (remember build 134?), it became part of the illumos project, and later made its way to FreeBSD® and Linux. Trust in OpenZFS was developed over time because of the open source model. It has spawned many storage projects, including FreeNAS™, which later became TrueNAS®.

Continue reading

The young report card on Decentralized Storage

I kept this blog in my queue for over 4 months. I was reluctant to publish it because I thought the outrageous frenzies of NFTs (non-fungible tokens), metaverses and web3 were convoluting the discussions on the decentralized storage topic. Three weeks back, a Google Trends search for these 3 opaque terms over 90 days showed that the worldwide fads were waning. Here was the Google Trends output on April 2, 2022:

Google Trends on NFT, metaverse and web3

Decentralized storage intrigues me. I like to believe in its potential, and I often talk to people to strengthen the narratives and support its adoption where it fits. But often, the real objectives of decentralized storage are obfuscated by the polarized conversations about the cryptocurrencies pegged to these offerings, NFTs (non-fungible tokens), DAOs (decentralized autonomous organizations), and plenty of hyperbole and bewildering claims as well.

But I continue to seek sustainable conversations about decentralized storage without the sway of the NFTs or the cryptos. After dipping my toes in, exchanging experiences with HODLers, and watching the return to sanity, I believe we can discuss decentralized storage with better clarity now. The context is to position decentralized storage for the mainstream, specifically for business organizations already immersed in centralized storage. Here is my fledgling report card on decentralized storage.

Continue reading

Backup – Lest we forget

World Backup Day – March 31st

Last week was World Backup Day. It is on March 31st every year, so that you don't lose your data and become an April Fool the next day.

Amidst the growing awareness of the importance of backup, no thanks to the ever-growing destructive nature of ransomware, it is important to look into the other aspects of data protection as well – from both a data backup/recovery and a data security point of view.

3-2-1 Rule, A-B-C and Air Gaps

I highlighted the basic 3-2-1 rule before. This must always be paired with a set of practised processes and policies, so that all stakeholders (aka the people) in the organization understand the importance of protecting the data and ensuring data recoverability.

The A-B-C is to look at the production dataset and decide if the data should be stored in the Tier 1 storage. In most cases, the data becomes less active over time, and these datasets may be good candidates to be archived. Once archived, the production dataset is smaller, and data backup operations become lighter and faster, with positive knock-on effects as well.

Air gaps have returned to prominence with the heightened threats on data in recent years. The threats have pushed organizations to consider keeping data copies offsite and offline behind air gaps. Cost and speed of recovery can be concerns, and logical air gaps are also gaining acceptance as an extra layer of data protection.

Backup is not total Data Protection cyberdefence

If we view data protection more holistically and comprehensively, backup (and recovery) is not the total data protection solution. We must ignore the fancy rhetoric of the technology marketers claiming that backup is the solution that ensures data protection, because there is much more to it than that.

The well-respected NIST (National Institute of Standards and Technology) Cybersecurity Framework places Recover (along with backup) as the last pillar of its framework.

NIST Cybersecurity Framework

Continue reading

Nakivo Backup Replication architecture and installation on TrueNAS – Part 1

Backup and replication software has received strong mandates in organizations with enterprise mindsets and vision. But lower down the rungs, small and medium organizations are less invested in backup and replication software. These organizations know full well that they must back up, replicate and protect their servers, physical and virtual, as well as new workloads in the clouds, given that the threat of security breaches and ransomware looms larger and larger all the time. But many are often put off by the cost of implementing and deploying backup and replication software.

So I explored one of the lesser-known backup and recovery software packages, Nakivo® Backup and Replication (NBR), and took the opportunity to build a backup and replication appliance in my homelab with TrueNAS®. My objective was to create a cost-effective option for small and medium organizations to enjoy enterprise-grade protection and recovery without the hefty price tag.

This blog, Part 1, covers an overview of the Nakivo® architecture and the installation of the NBR software on TrueNAS®, baking it in to create the concept of a backup and replication appliance. Part 2, in a future blog post, will cover the administration and operational usage of NBR.

Continue reading

Please cultivate 3-2-1 and A-B-C of Data Management

My Sunday morning was muddled 2 weeks ago. There was a frenetic call from someone I knew a while back, and he needed some advice. It turned out that his company's files were encrypted and the "backups" (more on this later) were gone. With some detective work, I found that their files were stored on a Synology® NAS, often accessed remotely via QuickConnect, and "backed up" to Microsoft® Azure. I put "backup" in inverted commas because their definition of "backup" was using Synology®'s Cloud Sync to Azure. It is not a true backup but a file synchronization service that is often mislabeled as a data protection backup service.

All of his company's project files were encrypted and there were no backups to recover from. It was a typical ransomware cluster F crime scene.

I would have gloated, because many small and medium businesses like his take a very poor and lackadaisical attitude towards good data management practices. There is no use crying over spilled milk when prevention is better than cure. But having skipped that early investment in prevention, the cure will likely be 3x more expensive. And in this case, he wanted to use Deloitte® recovery services, which I did not know existed. Good luck with the recovery was all I said to him, after my Sunday morning was made topsy-turvy of sorts.

NAS is the ransomware goldmine

I have said it before and I am saying it again. NAS devices, especially the consumer and prosumer brands, are easy pickings, because little attention is paid to implementing good data management practices, either by the respective vendors or by the end users themselves. Two years ago I was already seeing a consistent pattern of heightened ransomware attacks on NAS devices, especially the NAS devices that proliferate in the small and medium business market segment.

The WFH (work from home) practice triggered by the Covid-19 pandemic has made NAS devices essential for businesses. NAS devices are the workhorses of many businesses, after all. The ease of connecting from anywhere with features similar to the Synology® QuickConnect I mentioned earlier, or through VPNs (virtual private networks), or self-created port forwarding (for those who want to save a quick buck [sarcasm]), opened the doors to bad actors and easy ransomware incursions. Good data management practices are often sidestepped or ignored in exchange for simplicity, convenience, and trying to save foolish dollars. Until ….

Continue reading

Celebrating MinIO

Essentially MinIO is a web server …

I vaguely recall Anand Babu Periasamy (AB, as he is known), the CEO of MinIO, saying that when I first met him in 2017. I was fresh from "playing around" with MinIO, and instantly I fell in love with the software. Wait a minute. Object storage wasn't supposed to be so easy. It was not supposed to be that simple to set up and use, but MinIO burst into my storage universe like the birth of the Infinity Stones. It was a eureka moment. And I was attending one of the Storage Field Days in the US shortly after my MinIO discovery in late 2017. What an opportunity!
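As a small flavour of that simplicity, here is a minimal sketch using the MinIO Python SDK. The endpoint and credentials are placeholders for a locally running MinIO server; this is my own illustration, not MinIO's official quickstart.

```python
# Minimal object storage round trip with the MinIO Python SDK (pip install minio).
# Endpoint and credentials are placeholders for a locally running test server.
from minio import Minio

client = Minio(
    "localhost:9000",          # assumed local MinIO endpoint
    access_key="minioadmin",   # placeholder credentials
    secret_key="minioadmin",
    secure=False,              # plain HTTP, fine for a local sandbox
)

bucket = "demo-bucket"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a local file as an object, then list the bucket's contents.
client.fput_object(bucket, "hello.txt", "/tmp/hello.txt")
for obj in client.list_objects(bucket):
    print(obj.object_name, obj.size)
```

That is more or less the whole learning curve, which is exactly why the "essentially a web server" remark stuck with me.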

I cannot recall how I made the appointment to meet MinIO, but I remember taking an Uber to their cosy office on University Avenue in Palo Alto. Through Andy Watson (one of the CTOs then), I was introduced to AB; Garima Kapoor, MinIO's COO and AB's wife; Frank Wessels; Zamin (one of the business people, who is no longer there); and Ugur Tigli (East Coast CTO), who was on the Polycom. I was awestruck.

Last week, MinIO scored a major Series B funding round of USD 103 million. It was delayed by the pandemic; I recall Garima telling me that the funding was supposed to happen in 2020. But I think the delay made it better, because the world is now even more ready for MinIO than ever before.

Continue reading

Crash consistent data recovery for ZFS volumes

While TrueNAS® CORE and TrueNAS® Enterprise are better known for their NAS (network attached storage) prowess, many organizations are also confidently placing their enterprise applications, such as hypervisors and databases, on TrueNAS® via SANs (storage area networks) as well. Both the iSCSI and Fibre Channel™ (on selected TrueNAS® Enterprise storage models) protocols are well supported.

To reliably protect these block-based applications via the SAN protocols, the ZFS snapshot is the key technology that can be depended upon to restore the enterprise applications quickly. However, there is still some confusion when it comes to the state of recovery from the ZFS snapshots. This situation is not unique to ZFS environments, because as with many other storage technologies, the confusion often stems from a (mis)understanding of the consistency state of the data in the backups and in the snapshots.

Crash Consistency vs Application Consistency

To dispel this misunderstanding, we must first begin with the understanding of a generic, filesystem-agnostic snapshot. It is a point-in-time copy, just like a data copy on tape, on disk, or in a cloud backup. It is a complete image of the data and the state of the data at the storage layer at the time the storage snapshot was taken. This means that the data and metadata in this snapshot copy/version have a consistent state at that point in time. This state is frozen for this particular snapshot version, and therefore it is often labeled as "crash consistent".

In the event of a subsystem (application, compute, storage, rack, site, etc.) failure or a power loss, data recovery can be initiated using the last known "crash consistent" state, i.e. restoring from the last good backup or snapshot copy. Depending on the applications, operating systems, hypervisors, filesystems and the subsystems (journals, transaction logs, protocol resiliency primitives, etc.) aligned with them, some workloads will just continue from where they stopped. They may already have their own recovery mechanisms, or they can accept the data loss without data corruption and inconsistencies.

Some applications, especially databases, are more sensitive to data and state consistencies. That is because of how these applications are designed. Take, for instance, the Oracle® database. When an Oracle® database instance is online, there is an SGA (system global area) which handles all the running mechanics of the database. The SGA exists in the memory of the compute node, along with the transaction logs, tablespaces and open files that represent the Oracle® database instance. From time to time, often measured in seconds, the state of the Oracle® instance and the data it is processing have to be synched to non-volatile, persistent storage. This commit is important to ensure the integrity of the data at all times.
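To illustrate the difference in practice, here is a rough sketch that contrasts a crash-consistent ZFS snapshot with an application-consistent one using Oracle®'s classic hot backup mode. The pool/dataset name, the snapshot labels and the use of sqlplus with OS authentication are all assumptions for illustration, not a production script.

```python
# Sketch: crash-consistent vs application-consistent ZFS snapshots of an Oracle® dataset.
# Dataset name, labels and the sqlplus invocation are assumptions for illustration only.
import subprocess
from datetime import datetime

DATASET = "tank/oradata"   # hypothetical ZFS dataset holding the database files
STAMP = datetime.now().strftime("%Y%m%d-%H%M%S")

def zfs_snapshot(label: str) -> None:
    # A ZFS snapshot is atomic and point-in-time at the storage layer: crash consistent.
    subprocess.run(["zfs", "snapshot", f"{DATASET}@{label}-{STAMP}"], check=True)

def sqlplus(statement: str) -> None:
    # Run a statement as SYSDBA via sqlplus (assumes OS authentication is configured).
    subprocess.run(["sqlplus", "-S", "/", "as", "sysdba"],
                   input=f"{statement};\nexit;\n", text=True, check=True)

# 1. Crash consistent: freeze the dataset as-is. On restore, the database relies on
#    its own crash recovery (redo/undo) to bring the files back to a consistent state.
zfs_snapshot("crash-consistent")

# 2. Application consistent: quiesce the database first (requires ARCHIVELOG mode),
#    so the on-disk state is at a known-good commit point before the snapshot.
sqlplus("ALTER DATABASE BEGIN BACKUP")
try:
    zfs_snapshot("app-consistent")
finally:
    sqlplus("ALTER DATABASE END BACKUP")
```

The quiesce step is doing exactly what the paragraph above describes: forcing the in-memory state of the instance down to a committed, known-good position on persistent storage before the storage layer freezes its point-in-time copy.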

Continue reading