Nakivo Backup & Replication architecture and installation on TrueNAS – Part 1

Backup and replication software has a strong mandate in organizations with enterprise mindsets and vision. But lower down the rung, small and medium organizations are less invested in backup and replication software. These organizations know full well that they must back up, replicate and protect their servers, physical and virtual, as well as new workloads in the clouds, given that the threat of security breaches and ransomware looms larger all the time. But many are often put off by the cost of implementing and deploying backup and replication software.

So I explored one of the lesser-known backup and recovery products, Nakivo® Backup and Replication (NBR), and took the opportunity to build a backup and replication appliance in my homelab with TrueNAS®. My objective was to create a cost-effective option for small and medium organizations to enjoy enterprise-grade protection and recovery without the hefty price tag.

This blog, Part 1, covers an overview of the Nakivo® architecture and the installation of the NBR software on TrueNAS® to bake in and create the concept of a backup and replication appliance. Part 2, in a future blog post, will cover the administration and operation of NBR.

Continue reading

First looks into Interplanetary File System

The cryptocurrency craze has elevated another strong candidate in recent months. Filecoin is leading the voice of a decentralized Internet, the next-generation Web 3.0. In this blog, I am not going to write much about the Filecoin frenzy, but about the underlying distributed file system that powers this phenomenon – the Interplanetary File System.

[ Note: This is still a very new area for me, and the rest of the content of this blog is still nascent and developing ]

Interplanetary File System

Tremulous Client-Server web architecture

The Internet is almost entirely a client-server architecture. Clients such as browsers and apps connect to web services served from collections of servers. As Web 3.0 approaches (some say it is already here), the client-server model is no longer perceived as the Internet architecture of choice. Billions and billions of users, applications and devices relying solely on centralized services would lead to many impactful consequences, and the reasons for decentralizing away from the client-server architecture of the Internet are cogent.
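
What makes IPFS different is content addressing: instead of asking a specific server for a file at a location, you ask the network for content by its cryptographic hash, and any node holding that content can serve it. Here is a minimal sketch with the go-ipfs command line tools, assuming they are installed; hello.txt and the <CID> are placeholders for your own file and the content identifier that ipfs add prints out:

$ ipfs init
$ ipfs add hello.txt        # prints the content identifier (CID) of the file
$ ipfs cat <CID>            # fetches the same content by its CID, from whichever node has it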

Continue reading

Is Software Defined right for Storage?

George Herbert Leigh Mallory, mountaineer extraordinaire, was once asked “Why did you want to climb Mount Everest?”, to which he replied “Because it’s there”. That retort demonstrated the indomitable human spirit and probably best exemplified the human desire to conquer the physical limits of nature. The software of humanity versus the hardware of planet Earth.

A similar juxtaposition can be drawn between software and hardware in computer systems, and in storage technology in particular. There are a few schools of thought when it comes to delivering storage services, with the notable ones being the storage appliance model and the software-defined storage model.

There are arguments, of course. Some are genuinely partisan, but many a time these arguments come in the form of the flavour of the moment. I have seen past companies of mine tout the storage appliance model very strongly in the beginning, only to switch to a “software company” chorus years later. That is what I mean by the “flavour of the moment”.

Software Defined Storage

Continue reading

A FreeNAS Compression Tale

David vs Goliath Credit: Miguel Robledo of https://www.artstation.com/miguel_robledo

David vs Goliath

It was an underdog tale worthy of the biblical book of Samuel. When I first caught wind of how the compression prowess of FreeNAS™ was going up against NetApp® compression and deduplication in one use case, I had to find out more. And the results in this use case were quite impressive, considering that FreeNAS™ (now known as TrueNAS® CORE) is a free, open source storage operating system while NetApp® Data ONTAP is the industry-leading, enterprise, “king of the hill” storage data management software.

Certainly a David vs Goliath story.

Compression in FreeNAS

Ah, compression! That technology that is often hidden, hardly seen and frequently forgotten.

Compression is a feature within FreeNAS™ that seldom gets attention. It works, and it is certainly a mature form of data footprint reduction (DFR) technology, along with data deduplication. It is switched on by default, and is the standard setting when creating a dataset, as shown below:

Dataset creation with Compression (lz4) turned on

The default compression algorithm is lz4, which is fast but has a poorer compression ratio compared to gzip and bzip2. However, lz4 uses fewer CPU cycles to perform its compression and decompression processing, and thus the impact on FreeNAS™ and TrueNAS® is very low.
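
If you are curious what a dataset is using and how much it is actually saving, the standard ZFS properties will tell you. A quick sketch, with pool/dataset as a placeholder for your own dataset name:

# zfs get compression pool/dataset      # shows the algorithm in use (lz4 by default)
# zfs get compressratio pool/dataset    # shows the savings achieved on the data written so far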

NetApp® ONTAP, if I am not wrong, uses lzopro as its default – a commercial, optimized version of the open source LZO compression library. In addition, NetApp® also has mature data deduplication technology, something OpenZFS has to improve upon in the future.

The DFR report

This brings us to a use case at one of iXsystems™’s customers in Taiwan. The data to be reduced is mostly log files at the end user’s site, and the version of FreeNAS™ is 11.2u7. There are, of course, many factors that affect the data reduction ratio, but in this case of 4 scenarios, the end user has been running this in production for over 2 months. The results:

FreeNAS vs NetApp Data Footprint Reduction

In 2 of the 4 scenarios, FreeNAS™ performed admirably with just the default lz4 compression, compared to NetApp®, which was running both its inline compression and deduplication.

The intention of posting this report is not to show that FreeNAS™ is better in every case. It won’t be, and there are superior data footprint reduction technologies out there which can outperform it. But I would expect potential and existing end users to leverage the compression capability of FreeNAS™, which is getting better all the time.

A better compression algorithm

Followers of OpenZFS are aware of the changing of times with OpenZFS version 2.0. One exciting update is the introduction of the zstd compression algorithm into OpenZFS late last year, and it is already in TrueNAS® CORE and Enterprise version 12.x.

What is zstd? zstd is a fast compression algorithm that aims to be as efficient as (or better than) gzip, while running at speeds much closer to lz4. For a long time, the gzip compression algorithm, with its levels 1-9, has delivered a very good compression ratio compared to many other compression algorithms, lz4 included.

However, that efficiency comes at a higher processing cost and thus takes a longer time. At the other end, lz4 is fast and lightweight, but its reduction ratio is poor. zstd intends to be the in-between of gzip and lz4. Here are the latest results published on Facebook’s GitHub page:

zstd performance benchmark against other compression algorithms

For comparison, zstd (level -1) performed very well against zlib, the data compression library behind gzip. There are 22 compression levels in zstd, but I do not know how many of those levels have been adopted in the OpenZFS implementation.
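
You can get a rough feel for the trade-off on your own data with the standalone tools, outside of ZFS. A simple sketch, assuming the gzip, zstd and lz4 utilities are installed and logfile is a placeholder for a sample file; the run times and resulting file sizes tell the story:

$ gzip -k -6 logfile       # good ratio, slower; -k keeps the original file
$ zstd -6 logfile          # produces logfile.zst; ratio close to gzip at much higher speed
$ lz4 logfile              # produces logfile.lz4; fastest, but the weakest ratio
$ ls -l logfile*           # compare the compressed sizes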

At the same time, compression takes advantage of multi-core processing, and can actually speed up disk I/O response because the data to be read and written is smaller after compression.

While TrueNAS® still defaults to lz4 compression for now, you can change the compression of a dataset with a command such as:

# zfs set compression=zstd-6 pool/dataset
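
Note that the change only applies to data written from that point onward; existing blocks stay compressed with the old algorithm until they are rewritten. If you are starting fresh, the compression can also be chosen at dataset creation time. A small sketch, with pool/newdataset as a placeholder:

# zfs create -o compression=zstd-6 pool/newdataset     # new dataset writes zstd-compressed blocks from day one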

Your choice

TrueNAS® and FreeNAS™ support multiple compression algorithms: lz4, gzip and now zstd. That gives the administrator the choice to assign the right compression algorithm based on processing power, storage savings and time, to get the best out of the data stored in the datasets.

As far as the David vs Goliath tale goes, this real life use case was indeed a good one to share.


Discovering OpenZFS Fusion Pool

Fusion Pool excites me, but unfortunately this new key feature of OpenZFS is hardly talked about. I would like to introduce the Fusion Pool feature as iXsystems™ expands the TrueNAS® Enterprise storage conversation.

I would not say that this technology is revolutionary. Other vendors already have concepts similar to Fusion Pool. The most notable (to me) is NetApp® Flash Pool, and I am sure other enterprise storage vendors have something similar. But it is a big deal (for me) to see this in an open source file system like OpenZFS.

What is Fusion Pool (aka ZFS Allocation Classes)?

To understand Fusion Pool, we have to understand the basics of the ZFS zpool. A zpool is an aggregation (borrowing the NetApp® terminology) of vdevs (virtual devices), and a vdev is a collection of physical drives configured with one of the OpenZFS RAID levels (RAID-0, RAID-1, RAID-Z1, RAID-Z2, RAID-Z3 and a few nested RAID permutations). A zpool can start with one vdev, and new vdevs can be added on-the-fly, expanding the capacity of the zpool online.
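
To make that concrete, here is a minimal sketch of a zpool built from one RAID-Z2 vdev and later expanded online with a second vdev. The pool name tank and the da0 to da11 disk names are placeholders for illustration:

# zpool create tank raidz2 da0 da1 da2 da3 da4 da5         # zpool starts with a single 6-disk RAID-Z2 vdev
# zpool add tank raidz2 da6 da7 da8 da9 da10 da11          # second vdev added on-the-fly, capacity grows immediately
# zpool status tank                                        # shows both vdevs inside the one zpool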

There were already several types of vdevs prior to Fusion Pool, that is, before TrueNAS® version 12.0. These vdev types available to the zpool are shown below.

OpenZFS zpool and vdev types – Credit: Jim Salter and Ars Technica

A Fusion Pool is a zpool that integrates a new, special type of vdev alongside the other normal vdevs. This special vdev is designed to work with small data blocks between 4K and 16K, and is highly efficient in handling random reads and writes of these small blocks. This bodes well for the OpenZFS file system metadata blocks and other blocks of small files. And the random nature of the read/write I/Os works best with SSDs (they can be read- or write-intensive SSDs).
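
In OpenZFS terms this is the “special” allocation class. A hedged sketch of what the commands look like, again with the pool name tank, the SSD device names ada0/ada1 and the dataset name as placeholders:

# zpool add tank special mirror ada0 ada1           # mirrored SSDs become the special vdev for metadata and small blocks
# zfs set special_small_blocks=16K tank/dataset     # data blocks of 16K or smaller are also steered to the special vdev

The special vdev should be redundant (a mirror, as above), because losing it means losing the whole pool.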

Continue reading

OpenZFS 2.0’s exciting new future

The OpenZFS (virtual) Developer Summit ended a little over a week ago. I stayed up a bit (not much) to listen to some of the talks because they started at midnight my time, and ran till 5am on the first day and 2am on the second day. Like a giddy schoolboy, I was excited, not because I am working for iXsystems™ now, but because I have been a fan and a follower of the ZFS file system for a long time.

History-wise, ZFS was conceived at Sun Microsystems and first released in 2005. I started working with ZFS in 2009, reselling Nexenta (my first venture into business with my company nextIQ), after I was professionally released by EMC early that year. I bought a Sun X4150 from one of Sun’s distributors and started building a lab server. I didn’t like the workings of NexentaStor (and NexentaCore) very much, and it was priced at 8TB per increment. Later, I started my second company with a partner, and it was he who showed me the elegance and beauty of ZFS through the command line. The creed of ZFS being both a volume manager and a file system at the same time, all from the CLI, had an effect on me. I was in love.

OpenZFS Developer Summit 2020 Logo

Exciting developments

Among the many talks shared at the OpenZFS Developer Summit 2020, there were a few ideas and developments which were exciting to me. Here are 3 which I liked, with some commentary on each.

  • Block Reference Table
  • dRAID (declustered RAID)
  • Persistent L2ARC (see the short sketch after this list)
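
To illustrate the last item: the L2ARC read cache has always been simple to add, but before OpenZFS 2.0 its contents were lost at every reboot and had to be re-warmed. With persistent L2ARC, the cache survives a reboot. A minimal sketch, with the pool name tank and the NVMe device name nvd0 as placeholders:

# zpool add tank cache nvd0     # adds an L2ARC cache device; on OpenZFS 2.0 its contents persist across reboots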

Continue reading

Falconstor Software Defined Data Preservation for the Next Generation

Falconstor® Software is gaining momentum. After an arduous climb back to the fore, it is beginning to soar again.

Tape technology and Digital Data Preservation

I have mentioned that long-term digital data preservation is a segment within the data lifecycle which has merit and prominence. SNIA® has shown that this is a strong and growing market segment through its 2007 and 2017 “100 Year Archive” surveys. The 3 critical challenges of this long, long-term digital data preservation are to keep the archives

  • Accessible
  • Undamaged
  • Usable

For the longest time, tape technology has been the king of the hill for digital data preservation. The technology is cheap and mature, and many enterprises have built their long-term strategy around it. And the pulse of the tape technology market is still very healthy.

The challenges of tape remain. Every 5 years or so, companies have to consider moving the data on their existing tape technology to the next generation. It is widely known that an LTO drive can read tapes from the previous 2 generations, and write to tapes one generation back. This tape transcription process of migrating digital data for the sake of preservation is bad because it affects the structural integrity and quality of the data’s content.

In my time covering Oil & Gas subsurface data management, I have seen NOCs (national oil companies) with 500,000 tapes of all generations, from 1/2″ to DDS, DAT to SDLT, 3590 to LTO 1-7. Millions are spent to transcribe these tapes every few years, and we have folks like Katalyst DM, Troika and more hovering over this landscape for their fill.

Continue reading

Dell EMC Isilon is an Emmy winner!

[ Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the vendors’ technologies presented at this event. The content of this blog is of my own opinions and views ]

And the Emmy® goes to …

Yes, the Emmy® goes to Dell EMC Isilon! It was indeed a well-deserved accolade and an honour!

Dell EMC Isilon had just won a Technology & Engineering Emmy® Award a week before Storage Field Day 19, for its outstanding pioneering work on NAS platform technology that tiers media and broadcasting content according to business value.

A lasting true clustered NAS

This is not a blog to praise Isilon, but one that instills respect for a real, true clustered, scale-out file system. I have known of OneFS for a long time, but have not really had the opportunity to put my hands on it since 2006 (there is a story). So here is a look back at history …

Back in the early to mid-2000s, there was a lot of talk about large-scale NAS. There were several players in the nascent scale-out NAS market. NetApp was the filer king, with several competitors such as Polyserve, Ibrix, Spinnaker, Panasas and the young upstart Isilon. There were also Procom, BlueArc and NetApp’s predecessor Auspex. By the second half of the 2000s, the market consolidated and most of these NAS players were acquired.

Continue reading