Crash-consistent data recovery for ZFS volumes

While TrueNAS® CORE and TrueNAS® Enterprise are better known for their NAS (network attached storage) prowess, many organizations are also confidently placing enterprise applications such as hypervisors and databases on TrueNAS® via SANs (storage area networks). Both the iSCSI and Fibre Channel™ (on selected TrueNAS® Enterprise storage models) protocols are well supported.

To reliably protect these block-based applications via the SAN protocols, the ZFS snapshot is the key technology that can be depended upon to restore the enterprise applications quickly. However, there is still some confusion when it comes to the state of recovery from ZFS snapshots. This situation is not unique to ZFS environments; as with many other storage technologies, the confusion often stems from a (mis)understanding of the consistency state of the data in the backups and in the snapshots.

Crash Consistency vs Application Consistency

To dispel this misunderstanding, we must first begin with an understanding of a generic, filesystem-agnostic snapshot. It is a point-in-time copy, just like a data copy on tape, on disk, or in a cloud backup. It is a complete image of the data, and of the state of the data at the storage layer, at the time the storage snapshot was taken. This means that the data and metadata in this snapshot copy/version have a consistent state at that point in time. This state is frozen for this particular snapshot version, and therefore it is often labeled as “crash consistent”.

In the event of a subsystem (application, compute, storage, rack, site, etc.) failure or a power loss, data recovery can be initiated using the last known “crash consistent” state, i.e. restoring from the last good backup or snapshot copy. Depending on the applications, operating systems, hypervisors, filesystems and the subsystems (journals, transaction logs, protocol resiliency primitives, etc.) that are aligned with them, some workloads will simply continue from where they stopped. They may already have their own recovery mechanisms, or they can accept some data loss without data corruption or inconsistency.
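As a concrete illustration of “restoring from the last good snapshot” on a ZFS SAN volume, here is a minimal Python sketch. The dataset name tank/vm01 is a made-up example, and a real runbook would first stop the client workload and detach the LUN; the zfs list and zfs rollback invocations themselves are standard ZFS CLI.

```python
# Minimal sketch: roll a zvol back to its most recent (crash-consistent)
# snapshot. The dataset name "tank/vm01" is a hypothetical example.
import subprocess

ZVOL = "tank/vm01"

# List snapshots of the zvol, sorted oldest to newest by creation time.
snapshots = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
     "-s", "creation", ZVOL],
    check=True, capture_output=True, text=True,
).stdout.split()

if not snapshots:
    raise SystemExit(f"no snapshots found for {ZVOL}")

latest = snapshots[-1]                # the last known crash-consistent state
print(f"rolling back to last crash-consistent state: {latest}")

# -r discards any snapshots newer than the rollback target.
subprocess.run(["zfs", "rollback", "-r", latest], check=True)
```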

Some applications, especially databases, are more sensitive to data and state consistency. That is because of how these applications are designed. Take, for instance, the Oracle® database. When an Oracle® database instance is online, there is the SGA (System Global Area), which handles all the running mechanics of the database. The SGA exists in the memory of the compute node, along with the transaction logs, tablespaces and open files that represent the Oracle® database instance. From time to time, often measured in seconds, the state of the Oracle® instance and the data it is processing have to be synced to non-volatile, persistent storage. This commit is important to ensure the integrity of the data at all times.
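This is why application-consistent protection wraps the snapshot in a quiesce/resume sequence, so that the database flushes its in-memory state to persistent storage before the point-in-time copy is cut. The sketch below only illustrates that ordering: quiesce_database() and resume_database() are hypothetical placeholders for whatever the database vendor prescribes (for Oracle®, typically placing the database into backup mode), and the dataset name is again made up.

```python
# Ordering sketch for an application-consistent ZFS snapshot:
# quiesce -> snapshot -> resume.
import subprocess
from datetime import datetime

DATASET = "tank/oradata"     # hypothetical dataset/zvol holding the DB files

def quiesce_database():
    # Placeholder: flush and freeze the database per the vendor's documented
    # procedure (e.g. putting it into backup mode) before the snapshot.
    pass

def resume_database():
    # Placeholder: take the database out of its quiesced/backup state.
    pass

def app_consistent_snapshot(dataset: str) -> str:
    snap = f"{dataset}@appconsistent-{datetime.now():%Y%m%d-%H%M%S}"
    quiesce_database()
    try:
        subprocess.run(["zfs", "snapshot", snap], check=True)
    finally:
        resume_database()        # never leave the database quiesced
    return snap

print("created", app_consistent_snapshot(DATASET))
```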

Continue reading

How well do you know your data and the storage platform that processes the data

Last week was consumed by many conversations on this topic. I was quite jaded, really. Unfortunately, many still take a very simplistic view of storage technology, or should I say, of the over-marketing of storage technology. So much so that end users make incredible assumptions about the benefits of a storage array, a software-defined storage platform or even cloud storage. And too often, the caveats of turning on a feature or tuning a configuration to the max are discarded or neglected. Regard for good storage and data management best practices? What’s that?

I share some of my thoughts on handling conversations like these, and on trying to set the right expectations rather than overhyping a feature or a function in data storage services.

Complex data networks and the storage services that serve them

I/O Characteristics

Applications and workloads (A&W) read from and write to data storage platforms. These could be local DAS (direct-attached storage), network storage arrays in SAN and NAS configurations, object storage, or cloud storage services. Whether the data is structured or unstructured, different A&Ws have different behavioural I/O patterns when accessing data from storage. Therefore, storage has to be configured to best match these patterns, so that it can perform optimally for these A&Ws. Without going into deep detail, here are a few to think about (with a small sketch after the list):

  • Random and sequential access patterns
  • Block sizes of these A&Ws, typically ranging from 4K to 1024K
  • The causal effects of synchronous and asynchronous I/O to and from the storage
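For those who want to see these effects rather than just read about them, here is a minimal Python sketch that contrasts sequential with random reads, and buffered with fsync’ed (synchronous-style) writes, against a scratch file. The file name, block count and loop sizes are arbitrary illustrations, not a benchmarking methodology; a real exercise would use a proper tool such as fio.

```python
# Illustrative only: contrasts sequential vs random 4K reads, and buffered
# vs per-write fsync'ed writes, on a ~100 MB scratch file.
import os
import random
import time

PATH = "scratch.bin"          # hypothetical scratch file
BLOCK = 4096                  # 4K, one of the common A&W block sizes
BLOCKS = 25_000               # ~100 MB working set

with open(PATH, "wb") as f:   # create the scratch file
    f.write(os.urandom(BLOCK) * BLOCKS)

def timed_reads(offsets):
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
randomised = random.sample(sequential, len(sequential))
print(f"sequential reads: {timed_reads(sequential):.2f}s, "
      f"random reads: {timed_reads(randomised):.2f}s")

def timed_writes(sync_each_write: bool):
    start = time.perf_counter()
    with open(PATH, "r+b", buffering=0) as f:
        for i in range(2_000):
            f.seek(i * BLOCK)
            f.write(b"x" * BLOCK)
            if sync_each_write:
                os.fsync(f.fileno())   # force the write to persistent storage
    return time.perf_counter() - start

print(f"async-style writes: {timed_writes(False):.2f}s, "
      f"fsync'ed writes: {timed_writes(True):.2f}s")

os.remove(PATH)
```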

Continue reading

At the mercy of the cloud deity

Amazon Web Services (AWS) went down in the middle of last week. News of the outage was reported widely:

AWS Management Console unavailable error

Piling on the misery

The AWS outage headlines attract the naysayers, the fickle armchair pundits, and the opportunists. Here are a few news articles where these folks chastised the cloud giant.

Of course, I am one of these critics. I don’t deny that I am. But I read this situation through the lens of the multicloud hyperbole, of which I am not a fan. There is too much multicloud whitewashing by vendors trying to pitch multicloud as a disaster recovery solution, without understanding that this is easier said than done.

Continue reading

Storage Elephant Compute Birds

Data movement is expensive. Not just in cost, but in latency and resources as well. Thus there have been many narratives about moving compute closer to where the data is stored, because moving compute is definitely more economical than moving data. I borrowed the analogy of the two animals from some old NetApp® slides which depicted storage as the elephant and compute as the birds. It was the perfect analogy, because storage is heavy and compute is light.

“Close up of a white Great Egret perching on top of an African Elephant at Amboseli National Park, Kenya”

Before the animal representation came about, I used to use the term “Data locality, Data Mobility”, because of past work on storage technology in the Oil & Gas subsurface data management pipeline.

Take stock of your data movement

I recently had conversations with an end user who has been paying a lot of money to keep their “backup” and “archive” data in AWS Glacier. The S3 storage is cheap enough to hold several petabytes of data for years, and the IT folks said that the data in AWS Glacier is for “backup” and “archive”. I put both words in quotes because the labels were carried over from their enterprise practice. However, the face of their business is changing. They are in manufacturing and downstream oil and gas, and their definitions of “backup” and “archive” data have changed.

For one, there is now a strong demand to reuse past data for various reasons, and these datasets have to be recalled from their cloud storage. Secondly, their data movement activities still mimic what they did in the past, during their enterprise storage days. It was a classic lift-and-shift when they moved to the cloud, without taking stock of their data movements and the operations they ran on these datasets. With this still ongoing, their monthly AWS bill costs a bomb.

Continue reading

What happened to NDMP?

The acronym NDMP shows up once in a while in NAS (Network Attached Storage) upgrade tenders. For the less informed, NDMP (Network Data Management Protocol) was one of the early NAS data management initiatives (more of a data mover specification, really) to back up NAS devices, especially the NAS appliances that run proprietary operating system code.

NDMP Logo

Backup software vendors often have agents developed specifically for an operating system or an operating environment. But back in the mid-1990s and 2000s, the internal file structures of these proprietary NAS vendors were less exposed, making it harder for backup vendors to develop agents for them. Furthermore, there was a need to simplify the data movement of NAS files between the backup servers, the NAS as a client, the media servers, and eventually the tape or disk targets. The dominant network at the time ran at 100Mbit/sec.

To overcome this, Network Appliance® and PDC Solutions/Legato® developed the NDMP protocol, allowing proprietary NAS devices to run a standardized client-server architecture with the NDMP server daemon in the NAS and the backup service running as an NDMP client. Here is a simplified look at the NDMP architecture.

NDMP Client-Server Architecture
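To make that client-server split concrete, here is a deliberately toy Python sketch of which side listens and which side asks. It is not real NDMP: the actual protocol runs an XDR-encoded control session (conventionally on TCP port 10000) and coordinates separate data and tape services, whereas the text “command” and reply below are invented purely for illustration.

```python
# Toy illustration of the NDMP-style split: a server daemon living on the
# NAS, and the backup software acting as a client. The plain-text exchange
# below is made up and is NOT the NDMP wire format.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 10000   # 10000 is NDMP's registered TCP port

def nas_side_daemon():
    """Stands in for the NDMP server daemon running inside the NAS."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            # A real daemon would drive the NAS's own dump/backup engine and
            # move the bulk data itself; only control traffic crosses here.
            conn.sendall(f"OK: backup job started for {request}".encode())

def backup_software_client(path: str):
    """Stands in for the backup application acting as the NDMP client."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(path.encode())        # "please back this path up"
        print(sock.recv(1024).decode())    # control-channel reply only

threading.Thread(target=nas_side_daemon, daemon=True).start()
time.sleep(0.5)                            # crude wait for the daemon to listen
backup_software_client("/vol/projects")
```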

Continue reading

Don’t go to the Clouds. Come back!

Almost in tandem last week, Nutanix™ and HPE appeared to have made denigrating comments about the Cloud First mandates of many organizations today. Nutanix™ took to its annual .NEXT conference to send the message that cloud is wasteful. HPE campaigned against the UK Public Sector’s “Cloud First” policy.

Cloud First or Cloud Not First

The anti-cloud-first messaging sounded a bit funny and hypocritical when both companies have a foot in the public clouds, advocating for many of their customers there. So what gives?

That A16Z report

For a number of years, many have feared criticizing the public cloud services openly. For me, there are 3 C-bombs in the public clouds:

  • Costs
  • Complexity
  • Control (lack of it)

Yes, we would hear of a few mini heart attacks here and there about clouds overcharging customers, and about security fallouts. But the vendors back then, looking up to the big 3 public clouds as deities, rarely chastised them for their errors. Until recently.

“The Cost of Cloud, a Trillion Dollar Paradox”, released by the revered VC firm Andreessen Horowitz in May 2021, opened up the voices of several vendors, who are now emboldened to make stronger comments about the shortcomings of public cloud services. The report has made it evident that public cloud services are not the panacea for all IT woes. And looking at the trends, this will only get louder.

Use ours first. We are better

It is pretty obvious that both Nutanix™ and HPE have bigger stakes outside the public cloud IaaS (infrastructure-as-a-service) offerings. It is also pretty obvious that neither is among the biggest players in this cloud-first economy. Given their weight in their respective markets, they are leveraging their positions to swing mindsets to the turf where they can win.

“Use our technology and services. We are better, even though we are also in the public clouds.”

Not a zero-sum game

But IT services and IT technologies are not a zero-sum game. On-premises IT services and complementary public cloud services can co-exist. Both can leverage each other’s strengths and compensate for each other’s weaknesses, if you know how to blend and assimilate the best of both worlds. Hybrid cloud is the new black.

Gartner Hype Cycle

The IT pendulum swings. Technology hype reaches fever pitch. Everyone thinks there is a cure for cancer. Reality sets in. They realize that they were wrong (not completely) or right (not completely). Life goes on. The Gartner® Hype Cycle explains this very well.

The cloud is OK

There are many merits to having IT services provisioned in the cloud: agility, pay-per-use, OPEX, burst traffic, seemingly unlimited resources and so on. You can read more about them in Benefits of Cloud Computing: The pros and cons. Even AWS agrees, in Three things every business needs from hybrid cloud, perhaps to the chagrin of these naysayers.

I have opined that there is no single solution for everything. There is no Best Storage Technology Ever (a snarky post). And so, I believe there is nothing wrong with Nutanix™ and HPE, and maybe others, being hypocritical about their cloud and non-cloud technology offerings. These companies are adjusting and adapting to the changing landscape of IT environments, but it is best not to confuse customers about what tactics, strategy and vision are. Inconsistency in messaging diminishes trust.

The future of Fibre Channel in the Cloud Era

The world has pretty much settled on hybrid cloud as the way to go for IT infrastructure services today. Straddling the enterprise data center and the infrastructure-as-a-service offerings of the public clouds, hybrid clouds define the storage ecosystems and architectures of choice.

A recent Blocks & Files article, “Broadcom server-storage connectivity sales down but recovery coming”, caught my attention. One segment mentioned that server-storage connectivity sales were down 9%, leading me to think: “Is this a blip, or is it a signal that Fibre Channel, the venerable SAN (storage area network) protocol, is on the wane?”

Fibre Channel Sign

Thus, I am pondering the position of Fibre Channel SANs in the cloud era. Where do they stand now and in the near future? Continue reading

What If – The other side of Storage FUDs

Streaming on Disney+ now is Marvel Studios’ What If…? animated TV series. In the first episode, Peggy Carter, instead of Steve Rogers, took the super soldier serum and became the first Avenger. The TV series explores alternatives to, and possibilities beyond, what we may have considered the precepts and the order of things.

As storage practitioners, we are often faced with certain “dogmatic” arguments which are often a mix of measured actuality and marketing magic – aka FUD (fear, uncertainty, doubt). Time and again, we are thrown a curve ball, like “Oh, your competitor can do this. Can you?” Suddenly you feel pinned into a corner, and the pressure to defend your turf rises. You fumble; you have no answer; game over!

I have experienced these hearty objections many times over. The best experience was one particular meeting during my early days with NetApp® in 2000. I was only 1-2 months with the company, still wet behind the ears with the technology. I was pitching SnapMirror® to Ericsson Malaysia when the Scandinavian manager said, “I think you are lying!”. I was lost without a response. I fumbled spectacularly, although I can’t remember if we won or lost that opportunity.

Here are a few I often encountered. Let’s play the game of What If …?

What If …?

Continue reading

SSOT of Files

[ This is Part Two of “Where are your files living now?”. You can read Part One here ]

“Data locality, Data mobility”. It was a term I liked to use a lot when describing data consolidation, leading to my mention of files and folders, and where they live, in my previous blog. The thought that files and folders are now everywhere they can be, in a plethora of premises, stretches the premise of SSOT (Single Source of Truth). And this expatriation of files with minimal checks and balances disturbs me.

A year ago, just before I joined iXsystems, I was given embargoed Google® news, probably a week before they announced BigQuery Omni. Then I was interviewed by Enterprise IT News, a local Malaysian technology news portal, for an opinion quote. This was what I said:

“’The data warehouse in the cloud’ managed services of Big Query is underpinned by Google® Anthos, its hybrid cloud infra and service management platform based on GKE (Google® Kubernetes Engine). The containerised applications, both on-prem and in the multi-clouds, would allow Anthos to secure and orchestrate infra, services and policy management under one roof.”

I further said, “The data repositories remaining in each cloud is good for addressing data sovereignty and data security concerns, but it did not mention how it addresses ‘single source of truth’ across multi-clouds.”

Single Source of Truth – regardless of repositories

Continue reading

Where are your files living now?

[ This is Part One of a longer conversation ]

EMC2 (before the Dell® acquisition) in the 2000s had the tagline “Where Information Lives”™**. This was before the time of cloud storage. The tagline was an adage of enterprise data storage, proper and contemporaneous to the persistent narrative of the time – Data Consolidation. Within the data consolidation stories, thousands of files and folders moved about the networks of organizations, from servers to clients and clients to servers. NAS (Network Attached Storage) was, and still is, the workhorse of many, many organizations.

[ **Side story ] There was an internal anti-EMC joke within NetApp®: “Information has a new address”.

EMC tagline “Where Information Lives”

This was a time when there were almost no concerns about Shadow IT; ransomware was less known; and, most importantly, almost everyone knew where their files and folders were, more or less (except in Oil & Gas upstream – to be told later in this blog). That was because there were concerted attempts to consolidate data, and inadvertently files and folders, within the organization.

Even when these organizations were spread across the world, there were distributed file technologies at the time that could deliver files and folders in an acceptable manner. Definitely not as good as what we have today in a cloudy world, but acceptable. I personally worked on a project setting up the Andrew File System for Intel® in Penang in the mid-90s, almost joined Tacit Networks in the mid-2000s, and dabbled in Microsoft® Distributed File System with NetApp® and Windows file servers while fixing the mountain of issues in deploying the worldwide GUSto (Global Unified Storage) project at Shell in 2006. Somewhere in that chronology were also Acopia Networks (acquired by F5) and, of course, EMC2 Rainfinity and the NetApp® NuView OEM, Virtual File Manager.

The point I am trying to make here is that most IT organizations had a good grip on where their files and folders were. I do not think that is very true anymore. Do you know where your files and folders are living today?

Continue reading