Fusion Pool excites me, but unfortunately this key new feature of OpenZFS is hardly talked about. I would like to introduce the Fusion Pool feature as iXsystems™ expands the TrueNAS® Enterprise storage conversations.
I would not say that this technology is revolutionary. Other vendors already have similar concepts; the most notable (to me) is NetApp® Flash Pool, and I am sure other enterprise storage vendors offer much the same. But it is a big deal (for me) to see it in an open source file system like OpenZFS.
What is Fusion Pool (aka ZFS Allocation Classes)?
To understand Fusion Pool, we have to understand the basics of the ZFS zpool. A zpool is the aggregation (borrowing the NetApp® terminology) of vdevs (virtual devices), and each vdev is a collection of physical drives configured with one of the OpenZFS RAID levels (RAID-0, RAID-1, RAID-Z1, RAID-Z2, RAID-Z3, and a few nested RAID permutations). A zpool can start with one vdev, and new vdevs can be added on-the-fly, expanding the capacity of the zpool online.
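As a sketch of what that looks like in practice (the pool name `tank` and disk names like `da0` are placeholders for illustration; this requires root on a system with OpenZFS installed):

```shell
# Create a zpool from a single RAID-Z2 vdev of six disks
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Later, expand the pool online by adding a second RAID-Z2 vdev;
# the new capacity is available immediately
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# Confirm the expanded capacity
zpool list tank
```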
Several types of vdevs already existed prior to Fusion Pool, that is, before TrueNAS® version 12.0: data vdevs, log vdevs (SLOG), cache vdevs (L2ARC), and spare vdevs.
Fusion Pool is a zpool that integrates a new, special type of vdev alongside the other, normal vdevs. This special vdev is designed to work with small data blocks between 4K and 16K, and is highly efficient at handling random reads and writes of these small blocks. This bodes well for the OpenZFS file system metadata blocks and the blocks of other small files. And the random nature of these Read/Write I/Os works best with SSDs (read-intensive or write-intensive SSDs both qualify).
Prior to Fusion Pool, all data blocks in the OpenZFS file systems, big (32K to 1MB) and small (4-16K), were basically shoved into the normal vdev constituents of the zpool, and this had a performance impact on the zpool. So, Fusion Pool is a godsend.
The only caveat is that this special vdev must be configured as a RAID-1 mirror, either 2-way or 3-way.
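A minimal sketch of attaching such a special vdev to an existing pool (the pool name `tank` and SSD device names are placeholders; the `special_small_blocks` property, which steers small data blocks onto the special vdev in addition to metadata, is a standard OpenZFS dataset property):

```shell
# Add a 2-way mirrored special vdev of SSDs to an existing pool
zpool add tank special mirror ssd0 ssd1

# Metadata lands on the special vdev by default; optionally steer
# small data blocks (here, those up to 16K) there as well
zfs set special_small_blocks=16K tank
```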
Hurray for OpenZFS Deduplication
Data deduplication in OpenZFS has always been a bugbear for me. The Dedupe Table (DDT) occupies 320 bytes per entry, and in past OpenZFS implementations it could fill up the RAM rather significantly. When the DDT became too big, part or all of it had to be evicted from RAM, resulting in poor dedupe performance. This meant we had to size up the RAM footprint quite a bit to support dedupe well.
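To see why the RAM footprint balloons, here is a back-of-the-envelope sketch. Only the 320 bytes per entry comes from the paragraph above; the 10 TiB of unique data and the 128 KiB record size are illustrative assumptions:

```shell
# Rough DDT RAM footprint: entries = unique data / record size,
# at 320 bytes per DDT entry (10 TiB / 128 KiB are illustrative numbers)
BYTES_PER_ENTRY=320
DATA_KIB=$(( 10 * 1024 * 1024 * 1024 ))   # 10 TiB expressed in KiB
RECORD_KIB=128
ENTRIES=$(( DATA_KIB / RECORD_KIB ))
DDT_MIB=$(( ENTRIES * BYTES_PER_ENTRY / 1024 / 1024 ))
echo "entries=$ENTRIES ddt=${DDT_MIB}MiB"
```

That works out to roughly 84 million entries and about 25 GiB of RAM for the DDT alone, which is why dedupe sizing was such a headache.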
With the introduction of Fusion Pool, the small metadata blocks of the DDT can now fit conducively on this special vdev. Again, this is a win for OpenZFS zpools, with enhanced I/O performance for both large and small blocks. Fingers crossed that we see the day when we can announce a 3:1 or 5:1 deduplication ratio from OpenZFS.
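OpenZFS allocation classes also allow the DDT to be placed on its own dedicated vdev, separate from the general-purpose special vdev. A hedged sketch, with `tank` and the SSD device names as placeholders:

```shell
# Dedicate a mirrored pair of SSDs to the dedup table (DDT)
zpool add tank dedup mirror ssd2 ssd3

# Enable deduplication on a dataset so the DDT comes into play
zfs set dedup=on tank/data
```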
With little fanfare, OpenZFS 2.0 and its recent developments have been strong in addressing performance. While I have written about the very strong data integrity and self-healing features of OpenZFS before, the performance side has been getting attention as well. Here are the key developments focused on performance (click on the links for the YouTube videos):
I have also commented on OpenZFS's exciting future; you can read my blog in the link below.
As an open source project, it is admirable that OpenZFS's new developments are aimed at the business and operational needs of the enterprise. And it has been more exciting than ever, seeing the rise and rise of OpenZFS.