Building Trust in the Storage Brand

Trust is everything. When done right, the brand is trust.

A Wikibon article last month, “Does Hardware (still) Matter?”, touched on my sentiments and hit close to the heart. As the world becomes more and more data-driven and cloud-centric, the prominence of IT infrastructure has faded from the purview of the boardroom. The importance of IT infrastructure cannot be discounted, but in this new age, storage infrastructure has become invisible.

In the seas of both on-premises and hybrid storage technology solutions, everyone is trying to stand out, trying to eke out the minutest ounces of differentiation and advantage to gain the customer’s micro-attention. With all the drum beating, the loyalty of the customer can switch in an instant unless we build trust.

I ponder a few storage industry variables that help build trust.

Open source Communities and tribes

During the heyday of proprietary software and OSes, protectionism was key to guarding the differentiations and the advantages. Licenses were common, and some were paired with the hardware hostid to create that “power combination”. And who can forget those serial dongle license keys? Urgh!!

Since the open source movement began (read The Cathedral and the Bazaar), the IT world has begun to trust software and OSes more and more. Open source communities grew and technology tribes formed in all types of niches, including storage software. Trust grew because the population of the communities kept the vendors honest. Gone are the days of the Evil Empire. Even Microsoft® became a ‘cool kid’.

TRUST

One open source storage filesystem I have worked extensively on is OpenZFS. From its beginnings after OpenSolaris® (remember build 134?), it became part of the Illumos project and later went upstream into FreeBSD® and Linux. Trust in OpenZFS was developed over time because of the open source model. It has spawned many storage projects, including FreeNAS™, which later became TrueNAS®.

Continue reading

Nakivo Backup Replication architecture and installation on TrueNAS – Part 1

Backup and replication software has received strong mandates in organizations with enterprise mindsets and vision. But lower down the rung, small and medium organizations are less invested in backup and replication software. These organizations know full well that they must back up, replicate and protect their servers, physical and virtual, as well as new workloads in the clouds, given that the threat of security breaches and ransomware looms larger and larger all the time. But many are often put off by the cost of implementing and deploying backup and replication software.

So I explored one of the lesser-known backup and recovery packages, Nakivo® Backup and Replication (NBR), and took the opportunity to build a backup and replication appliance in my homelab with TrueNAS®. My objective was to create a cost-effective option for small and medium organizations to enjoy enterprise-grade protection and recovery without the hefty price tag.

This blog, Part 1, covers the architecture of Nakivo® and the installation of the NBR software on TrueNAS®, baking it in to create the concept of a backup and replication appliance. Part 2, in a future blog post, will cover the administrative and operational usage of NBR.

Continue reading

Crash consistent data recovery for ZFS volumes

While TrueNAS® CORE and TrueNAS® Enterprise are better known for their NAS (network attached storage) prowess, many organizations are also confidently placing their enterprise applications, such as hypervisors and databases, on TrueNAS® via SANs (storage area networks). Both the iSCSI and Fibre Channel™ protocols (the latter on selected TrueNAS® Enterprise storage models) are well supported.

To reliably protect these block-based applications via the SAN protocols, the ZFS snapshot is the key technology that can be depended upon to restore the enterprise applications quickly. However, there is still some confusion when it comes to the state of recovery from ZFS snapshots. This situation is not unique to ZFS environments; as with many other storage technologies, the confusion often stems from a (mis)understanding of the consistency state of the data in the backups and in the snapshots.

Crash Consistency vs Application Consistency

To dispel this misunderstanding, we must first begin with the understanding of a generic, filesystem-agnostic snapshot. It is a point-in-time copy, just like a data copy on tape, on disk or in a cloud backup. It is a complete image of the data and the state of the data at the storage layer at the time the storage snapshot was taken. This means that the data and metadata in this snapshot copy/version have a consistent state at that point in time. This state is frozen for this particular snapshot version, and therefore it is often labeled as “crash consistent”.

In the event of a subsystem (application, compute, storage, rack, site, etc.) failure or a power loss, data recovery can be initiated using the last known “crash consistent” state, i.e. restoring from the last good backup or snapshot copy. Depending on the applications, operating systems, hypervisors, filesystems and the subsystems (journals, transaction logs, protocol resiliency primitives, etc.) aligned with them, some workloads will simply continue from where they stopped. They may already have recovery mechanisms of their own, or they can tolerate the data loss without data corruption and inconsistencies.
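To make the crash-consistent idea concrete, here is a minimal sketch in Python of taking and rolling back a crash-consistent ZFS snapshot. The dataset name is a hypothetical example of a zvol backing a SAN LUN, and the sketch assumes it runs where the zfs CLI is available (e.g. a TrueNAS® shell); the snapshot simply freezes whatever is on disk at that instant, with no application quiesce at all.

```python
#!/usr/bin/env python3
# Minimal sketch: a crash-consistent ZFS snapshot. The dataset/zvol name is a
# hypothetical example; the snapshot freezes whatever is on disk right now.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/iscsi/vm-lun01"   # hypothetical zvol backing a SAN LUN

def take_crash_consistent_snapshot(dataset: str) -> str:
    """Snapshot the dataset as-is; no application quiesce is performed."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    snap = f"{dataset}@crash-{stamp}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    return snap

def rollback_to(snapshot: str) -> None:
    """Restore the dataset to that last known point-in-time state."""
    subprocess.run(["zfs", "rollback", "-r", snapshot], check=True)

if __name__ == "__main__":
    snap = take_crash_consistent_snapshot(DATASET)
    print(f"Crash-consistent snapshot created: {snap}")
```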

Some applications, especially databases, are more sensitive to data and state consistency. That is because of how these applications are designed. Take, for instance, the Oracle® database. When an Oracle® database instance is online, there is an SGA (system global area) which handles all the running mechanics of the database. The SGA lives in the memory of the compute layer, alongside the transaction logs, tablespaces and open files that represent the Oracle® database instance. From time to time, often measured in seconds, the state of the Oracle® instance and the data it is processing have to be synced to non-volatile, persistent storage. This commit is important to ensure the integrity of the data at all times.
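By contrast, an application-consistent copy needs the application to be quiesced around the snapshot. Here is a minimal sketch, with hypothetical names, of wrapping the same ZFS snapshot with Oracle®’s hot backup mode. It assumes the database runs in ARCHIVELOG mode and that sqlplus can connect with OS authentication (“/ as sysdba”); in a real deployment the zfs commands would run on the TrueNAS® side (e.g. over SSH) while sqlplus runs on the database host, but the sketch keeps both local for brevity.

```python
#!/usr/bin/env python3
# Minimal sketch: an application-consistent snapshot of an Oracle® database on
# a ZFS dataset. Names are hypothetical; assumes ARCHIVELOG mode and
# OS-authenticated sqlplus ("/ as sysdba") reachable from this script.
import subprocess

DATASET = "tank/oradata"   # hypothetical dataset holding the Oracle datafiles

def run_sql(statement: str) -> None:
    """Execute a single SQL statement as SYSDBA via sqlplus."""
    subprocess.run(["sqlplus", "-S", "/ as sysdba"],
                   input=f"{statement};\nEXIT;\n", text=True, check=True)

def application_consistent_snapshot(dataset: str, label: str) -> None:
    run_sql("ALTER DATABASE BEGIN BACKUP")      # quiesce: freeze datafile headers
    try:
        subprocess.run(["zfs", "snapshot", f"{dataset}@{label}"], check=True)
    finally:
        run_sql("ALTER DATABASE END BACKUP")    # resume normal operation

if __name__ == "__main__":
    application_consistent_snapshot(DATASET, "app-consistent-demo")
```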

Continue reading

Control your Files. Control your Sovereignty.

Data residency, data sovereignty, data localization – the trio of data compliance and governance – have been on my mind a lot lately. I am seeing a disturbing trend. The “Splinternet” has picked up a hurried, hastened pace. We are now seeing many countries drawing up digital boundaries in the name of data privacy and data protection, backed by sovereign laws and regulations. Beyond these digital demarcations drawn along the lines of data definitions, digital “colonization” is a strong undercurrent, as developing countries accept larger and more powerful foreign powers into their playpens.

Public cloud services transcend national borders. The breakneck speed of public cloud adoption is causing anxieties and concerns among conservative governments everywhere. On the flip side of the coin, commerce has certainly flourished and bloomed as worldwide collaborations bring new opportunities and new markets – all for capitalism and growth.

[ Note: While we are on this debacle, the voices of decentralization are getting louder as well, but that is a topic for another day ]

Where are your data files now?

Continue reading

Don’t go to the Clouds. Come back!

Almost in tandem last week, Nutanix™ and HPE appeared to make denigrating comments about the Cloud First mandates of many organizations today. Nutanix™ took to its annual .NEXT conference to send the message that cloud is wasteful. HPE campaigned against the UK Public Sector “Cloud First” policy.

Cloud First or Cloud Not First

The anti-cloud-first messaging sounded a bit funny and hypocritical when both companies have a foot in the public clouds, advocating for many of their customers there. So what gives?

That A16Z report

For a number of years, many feared criticizing the public cloud services openly. For me, there are the 3 C bombs in public clouds.

  • Costs
  • Complexity
  • Control (lack of it)

Yeah, we would hear of a few mini heart attacks here and there about clouds overcharging customers, and about security fallouts. But the vendors, who back then looked up to the big 3 public clouds as deities, rarely chastised them for their errors. Until recently.

“The Cost of Cloud, a Trillion Dollar Paradox”, released by revered VC firm Andreessen Horowitz in May 2021, opened up the vocal cords of several vendors, who are now emboldened to make stronger comments about the shortcomings of public cloud services.

The report has made it evident that public cloud services are not the panacea for all IT woes. And looking at the trends, this chorus will only get louder.

Use ours first. We are better

It is pretty obvious that both Nutanix™ and HPE have bigger stakes outside the public cloud IaaS (infrastructure-as-a-service) offerings. It is also pretty obvious that both are not the biggest players in this cloud-first economy. Given their weight in their respective markets, they are leveraging their positions to swing mindsets to the turf where they can win.

“Use our technology and services. We are better, even though we are also in the public clouds.”

Not a zero sum game

But IT services and IT technologies are not a zero-sum game. On-premises IT services and complementary public cloud services can co-exist. Each can leverage the other’s strengths and shore up the other’s weaknesses, if you know how to blend and assimilate the best of both worlds. Hybrid cloud is the new black.

Gartner Hype Cycle

The IT pendulum swings. Technology hype reaches fever pitch. Everyone thinks there is a cure for cancer. Reality sets in. They realize that they were wrong (not completely) or right (not completely). Life goes on. The Gartner® Hype Cycle explains this very well.

The cloud is OK

There are many merits to having IT services provisioned in the cloud: agility, pay-per-use, OPEX, burst traffic, seemingly unlimited resources and so on. You can read more about it at Benefits of Cloud Computing: The pros and cons. Even AWS agrees, in Three things every business needs from hybrid cloud, perhaps to the chagrin of these naysayers.

I have opined that there is no single solution for everything. There is no Best Storage Technology Ever (a snarky post). And so, I believe there is nothing wrong with Nutanix™ and HPE, and maybe others, being hypocritical about their cloud and non-cloud technology offerings. These companies are adjusting and adapting to the changing landscapes of IT environments, but it is best not to confuse the customers about what tactics, strategy and vision are. Inconsistency in messaging diminishes trust.


Where are your files living now?

[ This is Part One of a longer conversation ]

EMC2 (before the Dell® acquisition) in the 2000s had a tagline, “Where Information Lives™”**. This was before the time of cloud storage. The tagline was an adage of enterprise data storage, proper and contemporaneous to the persistent narrative of the time – Data Consolidation. Within the data consolidation stories, thousands of files and folders moved about the networks of organizations, from servers to clients and clients to servers. NAS (Network Attached Storage) was, and still is, the workhorse of many, many organizations.

[ **Side story ] There was an internal anti-EMC joke within NetApp® called “Information has a new address”.

EMC tagline “Where Information Lives”

This was a time when there were almost no concerns about Shadow IT; ransomware was less known; and most importantly, almost everyone knew where their files and folders were, more or less (except in Oil & Gas upstream – to be told later in this blog). That was because there were concerted attempts to consolidate data, and inadvertently files and folders, within the organization.

Even when these organizations were spread across the world, there were distributed file technologies at the time that could deliver files and folders in an acceptable manner. Definitely not as good as what we have today in a cloudy world, but acceptable. I personally worked on a project setting up the Andrew File System for Intel® in Penang in the mid-90s, almost joined Tacit Networks in the mid-2000s, and dabbled in Microsoft® Distributed File System with NetApp® and Windows File Servers while fixing the mountains of issues in deploying Shell’s worldwide GUSto (Global Unified Storage) project in 2006. Somewhere in my chronological listings are Acopia Networks (acquired by F5) and, of course, EMC2 Rainfinity and the NetApp® NuView OEM, Virtual File Manager.

The point I am trying to make here is that most IT organizations had a good grip on where the files and folders were. I do not think this is very true anymore. Do you know where your files and folders are living today?

Continue reading

Storage IO straight to GPU

The parallel processing power of the GPU (Graphics Processing Unit) cannot be denied. One year ago, nVidia® overtook Intel® in market capitalization. And today [as of July 2, 2021], its market capitalization is more than double Intel’s: USD$510.53 billion vs USD$229.19 billion.

Thus it is not surprising that storage architectures are shifting from the CPU-centric paradigm to take advantage of the burgeoning prowess of the GPU. Two announcements in the storage news in recent weeks have caught my attention – the Windows 11 DirectStorage API and nVidia® Magnum IO GPUDirect® Storage.

nVidia GPU

Exciting the gamers

The Windows DirectStorage API feature is only available in Windows 11. It was announced as part of the Xbox® Velocity Architecture last year to take advantage of the high I/O capability of modern-day NVMe SSDs. DirectStorage-enabled applications and games bring together several technologies, such as a Direct3D (D3D) decompression/compression algorithm designed for the GPU, and Sampler Feedback Streaming (SFS), which uses the results of previously rendered frames to decide which higher-resolution textures to load into GPU memory and render for the real-time gaming experience.

Continue reading

My 2-day weekend with Nextcloud on FreeNAS

In recent weeks, I have been asked by friends and old customers how to extend their NAS shared drives to work-from-home, the new reality. Malaysia went into a full lockdown as of June 1st, several days ago.

I have written about file synchronization stories before, but I have never done a Nextcloud blog. I have little experience with the TrueNAS® CORE Nextcloud plugin, and this was a good weekend to build it up from scratch in VirtualBox with FreeNAS™ 11.2U5 (because my friend was using that version).

[ Note ] FreeNAS™ 11.2U5 has been EOLed.

Nextcloud login screen

So, here is how it went for my little experiment. FYI, this is not a how-to guide. That will come later, after I have put all my notes together with screenshots and all. This is just a collection of my thoughts while setting up Nextcloud on FreeNAS™.

Dropbox® is expensive

Using cloud storage with file sync and share capability is not exactly cheap, especially when you are a small or medium-sized business, a school or a charity organization. Here is the pricing table for Dropbox® for Business:

Dropbox for business pricing

I am using Dropbox® as the example here, but the same can be said for OneDrive, Google Drive and others. The pricing can quickly add up when it is calculated per user per month, as the back-of-the-envelope sketch below shows.
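To put the per-user-per-month math into perspective, here is a trivial back-of-the-envelope sketch; the price and headcount are hypothetical placeholders, not Dropbox®’s (or any vendor’s) actual list prices.

```python
#!/usr/bin/env python3
# Back-of-the-envelope sketch of per-user-per-month pricing. The numbers are
# hypothetical placeholders, not actual vendor list prices.
PRICE_PER_USER_PER_MONTH = 15.00   # hypothetical USD
USERS = 50                         # hypothetical headcount
YEARS = 3

annual_cost = PRICE_PER_USER_PER_MONTH * USERS * 12
print(f"Per year     : USD${annual_cost:,.2f}")
print(f"Over {YEARS} years : USD${annual_cost * YEARS:,.2f}")
# 50 users at USD$15/user/month works out to USD$9,000 a year, USD$27,000 over
# three years -- recurring, before any storage overage or add-ons.
```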

Continue reading

Before we say goodbye to AFP

The Apple Filing Protocol (AFP) file sharing service in macOS Server is gone. The AFP file server capability was dropped in macOS version 11, aka Big Sur, back in December last year. The AFP client is the last remaining piece in macOS, and its days may be numbered as well, as the world of file services has evolved from the simple local networks and workgroup collaboration of the 80s and 90s to something more complex and demanding. AFP’s decline was probably also aided by the premium prices of Apple hardware, and many past users have switched to Windows for reasons of frugality and prudence. SMB/CIFS is the network file sharing service for Windows, and AFP is not offered natively in Windows.

macOS natively supports 3 file sharing protocols as a client – AFP, NFS and SMB/CIFS. Therefore, it has the capability to collaborate well in many media and content development environments, sharing and exchanging files easily, assuming that the access controls, permissions and file/folder ownerships are worked out properly. The large-scale Apple-only network environment is no longer feasible, and many studios that continue to use Macs for media and content development have only a handful of machines and users.

There are not many NAS vendors that continue to support AFP file server services either, or at least not many that advertise their support for AFP. iXsystems™ TrueNAS® is one of the few. This blog shows the steps to set up the AFP file service for macOS clients.
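As a taste of the client side, here is a minimal sketch of how a macOS machine might mount such an AFP share once the service is up on TrueNAS®. The server name, share name and mount point are hypothetical examples, not values from the setup steps in this post.

```python
#!/usr/bin/env python3
# Minimal sketch: mounting an AFP share from a macOS client with mount_afp.
# Server, share and mount point names are hypothetical examples.
import os
import subprocess

AFP_URL = "afp://mediauser@truenas.local/MediaShare"    # hypothetical share
MOUNT_POINT = os.path.expanduser("~/afp/MediaShare")    # hypothetical mount point

os.makedirs(MOUNT_POINT, exist_ok=True)                 # mount point must exist
# -i makes mount_afp prompt for the password instead of embedding it in the URL
subprocess.run(["mount_afp", "-i", AFP_URL, MOUNT_POINT], check=True)
print(f"AFP share mounted at {MOUNT_POINT}")
```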

Continue reading

Fueling the Flywheel of AWS Storage

It was bound to happen. It happened. AWS Storage is the Number 1 Storage Company.

The telltale signs were there when Silicon Angle reported that AWS Storage revenue was around USD$6.5-7.0 billion last year and will reach USD$10 billion at the end of 2021. That news was just a month ago. Last week, IT Brand Pulse went a step further, declaring AWS Storage the Number 1 in terms of revenue. Both have the numbers to back it up.

AWS Logo

How did it become that way? How did AWS Storage become numero uno?

Flywheel juggernaut

I became interested in the Flywheel concept some years back. It was conceived in Jim Collins’ book, “Good to Great”, almost 20 years ago, and since then, Amazon.com has become the real-life enactment of the Flywheel concept.

Amazon.com Flywheel – How each turn becomes sturdier, brawnier.

Every turn of the flywheel requires the same amount of effort, although in the beginning the noticeable effect is minuscule. But as each turn gains momentum, the returns of each turn scale greater and greater relative to the fixed effort of operating a single turn.

Continue reading