I built a 6-node Gluster cluster with TrueNAS SCALE

I haven’t had hands-on with Gluster for over a decade. My last blog about Gluster was in 2011, right after I did a proof-of-concept for the now-defunct Jaring, Malaysia’s first ISP (Internet Service Provider). But I followed Gluster’s development on and off, until I found out that Gluster was a feature in the then-upcoming TrueNAS® SCALE. That was almost 2 years ago, just before I accepted an offer to join iXsystems™, my present employer.

The eagerness to test drive Gluster (again) on TrueNAS® SCALE has always been there, but I waited for SCALE to become GA. GA finally came on February 22, 2022. My plans for the test rig were laid out, and in the past few weeks, I have been diligently re-learning Gluster and scoping out a 6-node Gluster clustered storage built from TrueNAS® SCALE VMs on VirtualBox®.

Gluster on OpenZFS with TrueNAS SCALE

Before we continue, I must warn that this is not pretty. I have limited computing resources in my homelab, but Gluster worked beautifully once I ironed out the inefficiencies. Secondly, this is not a performance test either, for obvious reasons. So, this is the annals, along with the trials and tribulations, of my 6-node Gluster cluster test rig on TrueNAS® SCALE.

What is Gluster?

Gluster is a highly scalable, open source distributed filesystem. It allows hundreds of clients to perform high-performance I/O against a Gluster cluster of storage volumes, assembled from “bricks”, through a single namespace. It runs on COTS (commercial off-the-shelf) servers and TrueNAS® SCALE, and can easily be scaled to petabytes with its scale-out architecture.

There are several ways to access the Gluster cluster nodes: the POSIX-compliant GlusterFS native client, as well as NFS and SMB clients, over TCP or the high-throughput RDMA (remote direct memory access) transport.

Why 6-nodes? Explaining the Gluster volume types

Gluster (note that I am using Gluster and GlusterFS interchangeably) has several volume types. The aggregated volumes from the Gluster bricks are exported to connected network clients. Many volumes can co-exist in the cluster. Once each volume is mounted, a client can access many storage volumes and many clients can access a storage volume in tandem, depending on configurations. The integration between the Gluster clients and Gluster cluster in the single namespace is incredibly simple and flexible.

I wanted to test the most capable volume/brick combinations without haemorrhaging my test environment, and hence 6 nodes was the maximum I could conjure without breaking the bank.

GlusterFS supports several volume types. They are:

  • Distributed volumes (Conceptually like RAID-0)
  • Replicated volumes (Conceptually like RAID-1)
  • Dispersed volumes, previously known as striped volumes (Conceptually like parity-based RAID, but with erasure coding)
  • Distributed Replicated volumes
  • Distributed Dispersed volumes

With 6 nodes, I was able to configure 2 types of volumes – Distributed Replicated 2×3 (2 replicated sets of 3 bricks each) and Distributed Dispersed 2, 2+1 (2 dispersed sets of 2 data bricks plus 1 redundancy brick each) – for testing.

Note: These combinations of distributed, replicated and dispersed can be very confusing. More information about the Gluster architecture and volume types is found here. Or you can reach out to me and I can give you a whiteboard session as well.

Networking of the Gluster cluster

The first thing is to get all 6 nodes communicating with each other. I set up each node with a fixed IP address and its respective hostname using the 1-9 menu of the TrueNAS® SCALE console, and occasionally with the Linux shell (TrueNAS® SCALE is based on Debian 11).

To verify the hostname of each node, run hostnamectl.


hostnamectl output
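As a sketch of this step, the loop below just prints the check to run on each node (the hostnames g1 to g6 are my lab’s; adjust to yours):

```shell
# Print the per-node hostname check. Each node should report its own
# static hostname when you run hostnamectl on its shell.
for i in 1 2 3 4 5 6; do
  echo "g${i}: run 'hostnamectl --static' and expect g${i}"
done
```

Run the actual hostnamectl command on each node’s shell; the static hostname must match the name the other nodes will use to reach it.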

The absence of a DNS server means that I had to create an /etc/hosts file consisting of all 6 IP addresses and their corresponding hostnames. With the help of some kind folks in the TrueNAS® SCALE community, I followed their advice to configure this via the TrueNAS® SCALE webGUI: Network > Global Configuration > Host name database.

Hostname database

Once configured, verify that all the node entries are in the /etc/hosts file and that all the nodes can ping each other.

/etc/hosts file for each node
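For reference, the file on each node ends up looking something like this (the 192.168.0.x addresses are placeholders for illustration; my lab used its own fixed IPs):

```
192.168.0.101   g1
192.168.0.102   g2
192.168.0.103   g3
192.168.0.104   g4
192.168.0.105   g5
192.168.0.106   g6
```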

Gluster cluster management services

The Gluster cluster is managed by the glusterd daemon on each node. Start the glusterd service on each node and enable it to make it persistent across reboots.

# systemctl start glusterd

# systemctl enable glusterd

# systemctl status glusterd

glusterd service
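To save typing across six nodes, the start/enable step can be pushed from one place. This is only a sketch, assuming passwordless SSH as root (node names are from my lab); the loop prints the commands rather than executing them:

```shell
# Print the command to run against each node. 'systemctl enable --now'
# starts the service and enables it across reboots in one step.
for n in g1 g2 g3 g4 g5 g6; do
  echo "ssh root@${n} systemctl enable --now glusterd"
done
```

Remove the echo (or pipe the output to sh) to actually run them.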

Creating the cluster

The cluster is created with the gluster peer probe command. The probes can be run from any node, and each peer only needs to be probed once, from one node.

[ Node g1 ] # gluster peer probe g2

Repeat for nodes g3, g4, g5 and g6. To verify that the cluster has all 6 nodes running, run gluster peer status.

gluster peer status with 6 nodes running.
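The probe sequence from g1 can be sketched as below; the loop prints the commands rather than running them. Note that gluster peer status counts only the remote peers, so a healthy 6-node cluster reports 5 peers on each node:

```shell
# Print the probe commands to run once, from node g1. Each probe adds
# one node to the trusted storage pool.
for n in g2 g3 g4 g5 g6; do
  echo "gluster peer probe ${n}"
done
echo "gluster peer status"
```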

Creating the ZFS datasets

ZFS datasets have to be created on each of the nodes. These eventually become the bricks that assemble into the volumes I mentioned earlier.

I created 2 datasets on each of the nodes:

  • /mnt/pool0/gvol1  (to be used as distributed replicated 2 x 3 volume)
  • /mnt/pool0/gvol2  (to be used as a distributed dispersed 2, 2+1 volume)

So the bricks that are ready to be used are

  • g1:/mnt/pool0/gvol1
  • g2:/mnt/pool0/gvol1
  • g3:/mnt/pool0/gvol1
  • g4:/mnt/pool0/gvol1
  • g5:/mnt/pool0/gvol1
  • g6:/mnt/pool0/gvol1

And …

  • g1:/mnt/pool0/gvol2
  • g2:/mnt/pool0/gvol2
  • g3:/mnt/pool0/gvol2
  • g4:/mnt/pool0/gvol2
  • g5:/mnt/pool0/gvol2
  • g6:/mnt/pool0/gvol2
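On TrueNAS® SCALE the datasets are normally created from the webGUI, but the command-line equivalent for one node can be sketched like this (pool name pool0 as above; the loop prints the commands rather than running them):

```shell
# Print the zfs dataset creation commands; repeat on every node.
for ds in gvol1 gvol2; do
  echo "zfs create pool0/${ds}"
done
```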

Creating the gluster volumes

  1. For the Distributed Replicated Volume 2 x 3, using gvol1 bricks

# gluster volume create vol01 replica 3 transport tcp g1:/mnt/pool0/gvol1 g2:/mnt/pool0/gvol1 g3:/mnt/pool0/gvol1 g4:/mnt/pool0/gvol1 g5:/mnt/pool0/gvol1 g6:/mnt/pool0/gvol1

where vol01 is the name of the Gluster volume created and replica 3 creates replica sets of 3 bricks each. Because there are 6 bricks, the command splits them into 2 replicated sets of 3 bricks each, known as 2 x 3.

# gluster volume start vol01

# gluster volume info vol01

gluster volume start initiates the volume, and gluster volume info verifies its configuration.
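The 2 x 3 grouping can be sketched quickly: Gluster takes consecutive bricks from the create command to form each replica set (brick names g1 to g6 stand in for the full brick paths):

```shell
# Consecutive triplets in the brick list become the replica sets.
set -- g1 g2 g3 g4 g5 g6
echo "Replica set 1 (3-way mirror): $1 $2 $3"
echo "Replica set 2 (3-way mirror): $4 $5 $6"
```

So a file lands on one replica set, distributed by hash, and is mirrored across that set’s 3 bricks.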

  2. For the Distributed Dispersed Volume 2, 2+1, using gvol2 bricks

# gluster volume create vol02 disperse 3 redundancy 1 g1:/mnt/pool0/gvol2 g2:/mnt/pool0/gvol2 g3:/mnt/pool0/gvol2 g4:/mnt/pool0/gvol2 g5:/mnt/pool0/gvol2 g6:/mnt/pool0/gvol2

creates vol02 with disperse sets of 3 bricks each, of which 1 brick is redundancy. With 6 bricks, the command creates 2 dispersed sets of 2+1 each, known as 2, 2+1.

# gluster volume start vol02

# gluster volume info vol02

gluster volume info of vol02
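The 2, 2+1 grouping can be sketched on the brick list the same way: consecutive triplets become the disperse sets, each able to lose any one brick without data loss (brick names g1 to g6 stand in for the full brick paths):

```shell
# Consecutive triplets in the brick list become the disperse sets.
set -- g1 g2 g3 g4 g5 g6
echo "Disperse set 1 (2 data + 1 redundancy): $1 $2 $3"
echo "Disperse set 2 (2 data + 1 redundancy): $4 $5 $6"
```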

At this point, the Gluster volumes vol01 (Distributed Replicated) and vol02 (Distributed Dispersed) are ready for access from the clients.

Gluster client setup

My Gluster client is on Ubuntu® 21.10 Impish Indri, but this should work fine on Ubuntu® 18.04 or higher. It may require adding the Gluster PPA (Personal Package Archive) repository before installing the Gluster client.

# add-apt-repository ppa:gluster/glusterfs-10

# apt install glusterfs-client

Mount the Gluster volumes exported from TrueNAS® SCALE to mount points created on the Ubuntu® client.

# mount -t glusterfs g1:/vol01 /gluster_mountpoint-1

# mount -t glusterfs g5:/vol02 /gluster_mountpoint-2

Both commands connect to the Gluster volumes exported from the 6-node Gluster cluster via different entry points and translators.

To make these mount points persistent, edit the /etc/fstab file in the Gluster client with these entries below:

g1:/vol01  /gluster_mountpoint-1  glusterfs  defaults,_netdev  0  0

g5:/vol02  /gluster_mountpoint-2  glusterfs  defaults,_netdev  0  0
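To check the new fstab entries without rebooting, I would run mount -a followed by findmnt on the client; the snippet below just prints the two commands:

```shell
# Print the verification commands to run on the Gluster client.
echo "mount -a                   # mounts any fstab entries not yet mounted"
echo "findmnt -t fuse.glusterfs  # both gluster mounts should be listed"
```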

Thank you to all

This was a fun project. I am glad I took the time to learn about Gluster again, but more importantly, I am glad to share what I have learned. There are still a lot of other things to learn about Gluster, and I can’t say I could learn them all. But I truly enjoyed this experiment of mine.

I stand on the shoulders of giants. Many have contributed and shared generously for neophytes like me to learn. Thank you to all.


About cfheoh

I am a technology blogger with 30 years of IT experience. I write heavily on technologies related to storage networking and data management because those are my areas of interest and expertise. I introduce technologies with the objective of getting readers to know the facts and use that knowledge to cut through the marketing hype, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then will there be progress. I am involved in SNIA (Storage Networking Industry Association), and between 2013-2015, I was the SNIA South Asia & SNIA Malaysia non-voting representative to the SNIA Technical Council. I am currently employed at iXsystems as their General Manager for Asia Pacific Japan.

10 Responses to I built a 6-node Gluster cluster with TrueNAS SCALE

  1. Kjeld Schouten-Lebbing says:

    You’re aware this is not the supported way to use gluster on SCALE right?

    The only officially supported way is using TrueCommand.

    • cfheoh says:

      I know. I work for iX

      • Kjeld Schouten-lebbing says:

        Cool 🙂
        To be fair: I did test manual gluster setups a year-or-so ago and it even showed up in TrueCommand (which I did not expect).

        So it’s pretty resilient and should work ™ just fine 🙂

        • cfheoh says:

          More things coming from TrueNAS SCALE (Bluefin release) at the end of 2022, and more cool things from TrueCommand as well.

          • Tyler Rieves says:

            Random question that I can’t seem to find any solid info on: when clustering TrueNAS SCALE nodes, I know the nodes have to have two interfaces – one for client access and one for cluster data. Should the TrueCommand server be on the cluster vlan? It makes sense to me because it’s for managing the cluster and not for client access.

          • cfheoh says:

            Yes, TrueCommand is developed to manage the cluster. Check out this blog https://www.truenas.com/blog/truenas-scale-clustering/

  2. George Benson says:

    It seems that the Cobia release (I’m running 23.10.2) has removed some support for Gluster, or the interface has changed. The glusterd daemon is present and can be enabled. The gluster command is not present. Has Gluster support without TrueCommand been deprecated? Is there an updated installation procedure?

    • cfheoh says:

      Hello George

      Pleasure to make your acquaintance. There are some changes coming, and I cannot reveal them at the moment given my status as an employee of iX.

      My suggestion is to join the new TrueNAS Forums and get some indicative notes there. And if you are an existing customer with a support contract, you can open a S4 case and get some answers too. 😉

      Thank you for reading my blog and all the best.

  3. Pingback: TrueNAS CORE versus TrueNAS SCALE |

    • cfheoh says:

      Thanks for putting my blog entry in your post.

      The unfortunate thing is that GlusterFS is going away, EOL December 2023 and gone by December 2024. You can read about it here https://www.reddit.com/r/redhat/comments/u8jty9/gluster_storage_eol_now_what/

      The way forward for TrueNAS SCALE for the scale-out file services (the scale-out object storage service part is handled by the MinIO erasure sets) is still under wraps, and will be known soon. Personally, I am disappointed with GlusterFS’s future in TrueNAS SCALE, and I am also disappointed with the “maintenance engineering” phase of TrueNAS CORE and its enterprise brethren, TrueNAS Enterprise (different from TrueNAS SCALE Enterprise).

      I am leaving iX in a few weeks’ time. Destination unknown. Till then, all the best to you.
