Deploying a MinIO SNMD Object Storage Server in TrueNAS SCALE

[ Preamble ] This deployment of a MinIO SNMD (single node multi drive) object storage server on TrueNAS® SCALE 24.04 (codename “Dragonfish”) is experimental. I am deploying it in my home lab just for the fun of it. Do not deploy it in any production environment.

I have been contemplating this for quite a while: which MinIO deployment mode on TrueNAS® SCALE should I work on? There are 3 modes – Standalone, SNMD (Single Node Multi Drives) and MNMD (Multi Node Multi Drives). Of course, the ideal lab experiment is the MNMD deployment, the MinIO cluster, and I am still experimenting with that on my meagre lab resources.

In the end, I decided to implement SNMD since this is most likely deployed on top of a TrueNAS® SCALE storage appliance rather than on x86 bare metal or in a Kubernetes cluster on Linux systems. Incidentally, the concept of MNMD on top of TrueNAS® SCALE is “Kubernetes cluster”-like, albeit on a different container platform. At the same time, if this is deployed on TrueNAS® SCALE Enterprise, the dual-controller TrueNAS® storage appliance, the appliance’s active-passive HA architecture takes care of the “MinIO node’s” availability. Otherwise, it can be a full MinIO cluster spread and distributed across several TrueNAS® storage appliances (minimum 4 nodes in a 2+2 erasure set) in an MNMD deployment scheme.
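As a back-of-envelope illustration of what a 2+2 erasure set means for capacity and fault tolerance (my own sketch, not taken from the MinIO documentation; the drive size is an assumption), the arithmetic works out roughly like this:

```python
# Back-of-envelope erasure set arithmetic for a MinIO MNMD layout.
# Assumption (mine, for illustration): a D+P erasure set stores D data
# shards and P parity shards per object, can lose up to P drives and
# still serve reads, and yields D/(D+P) usable capacity.

def erasure_set_summary(data_shards: int, parity_shards: int, drive_tb: float) -> dict:
    total = data_shards + parity_shards
    return {
        "set_size": total,
        "read_tolerates_failures": parity_shards,
        "usable_ratio": data_shards / total,
        "usable_tb": data_shards * drive_tb,
        "raw_tb": total * drive_tb,
    }

# A 2+2 set across 4 nodes, one hypothetical 10 TB drive each:
print(erasure_set_summary(2, 2, 10.0))
# {'set_size': 4, 'read_tolerates_failures': 2, 'usable_ratio': 0.5,
#  'usable_tb': 20.0, 'raw_tb': 40.0}
```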

Ideally, the MNMD deployment should look like this:

MinIO distributed multi-node cluster architecture (credit: MinIO)

Another deployment consideration is whether to use the MinIO from the charts and community trains or the MinIO from the enterprise train. If you are not familiar with the TrueNAS® trains, check them out here.

Anyway, here goes the deployment of the MinIO enterprise train SNMD on TrueNAS® SCALE.

Preparing to install MinIO enterprise train of TrueNAS® Catalog

Note: This MinIO “enterprise train” on TrueNAS® SCALE is NOT the same as the newly released MinIO Enterprise Object Storage.

First of all, make sure the enterprise train is selected because, by default, the TrueNAS® Catalog enables only the charts and community trains. From Apps in the left navigation column, select Discover and choose Manage Catalogs as shown below:

Manage Catalog

Edit Catalog

On the next page, select Edit.

Select enterprise train

Select enterprise train and Save. When you choose Discover Apps again, you will see 2 container images of MinIO – charts and enterprise.

MinIO trains in Catalog

Choose the enterprise train to begin the installation.

MinIO pre-installation configurations

If this is a new, clean install, select the zpool to install the MinIO container (yes, it is a container in TrueNAS® SCALE).

MinIO storage pool

There are a number of important details you have to fill in to have MinIO SNMD deployed. Take note of the guide on the right side of the screen, which shows where the required details are needed, as shown below.

MinIO installation credentials requirements

Provide the details for the MinIO credentials. The Root User is the Access Key and the Root Password is the Secret Key. They correspond to the MinIO environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD respectively.

The User and Group default to the minio user and group ID of 568. There is an internal user and an internal group already created with this 568 ID.

MinIO User and Group

However, do take note that if you are going to use your own pre-created dataset for storage (Host Path in the Storage Configuration section later), you will have to provide a different user ID and group ID because the ownership of the admin-created dataset will be different.

In the Network Configuration section, there are a few things to take note of. The MinIO enterprise train uses ports 30000 and 30001 as the API port and the Web port respectively, unlike the charts train configuration, which uses port 9000 as the API port and port 9002 as the console port.

MinIO Network Configuration

TrueNAS® recommends unchecking Host Network but the documentation does not state why. This part is still vague to me but, for now, it makes little difference in this SNMD deployment. There is no Certificate chosen; it requires a TLS certificate, and I am in the midst of testing this for my MNMD deployment later.

In the MinIO Server URL field, input http://<IP address>:30000 and in the MinIO Browser Redirect URL field, put http://<IP address>:30001, as shown above.
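Once these URLs are saved, a quick way to confirm the API port is answering is MinIO’s unauthenticated liveness endpoint, /minio/health/live. Here is a minimal sketch (the IP address is a placeholder for your own; the ports are the enterprise train defaults above):

```python
# Minimal reachability check for the MinIO API port.
# Assumptions (mine): the server is at 192.168.1.50 (a placeholder IP)
# and the enterprise-train default API port 30000 from the Network
# Configuration section is unchanged.
import urllib.request

MINIO_API = "http://192.168.1.50:30000"

def minio_is_live(base_url: str, timeout: float = 5.0) -> bool:
    # MinIO serves an unauthenticated liveness probe at /minio/health/live;
    # an HTTP 200 means the server process is up on the API port.
    try:
        with urllib.request.urlopen(f"{base_url}/minio/health/live", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

print("MinIO API reachable:", minio_is_live(MINIO_API))
```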

The Storage Configuration section requires a bit of care, depending on the choices made. There are 2 types – Host Path and ixVolume, as shown below.

MinIO Storage Configuration – Storage Type

Host Path (Path that already exists on the system) is pre-created by the storage administrator in the zpool as a dataset. One dataset equals one drive, as a general rule of thumb. Take note that when creating these datasets, the owner of each new dataset must be a valid user and group. I created a myminio user and group with UID and GID of 3000, and assigned them as the owner of the /mnt/pool0/minio/data{1…4} datasets. Here is mine.

/mnt/pool0/minio/data{1…4} datasets

Otherwise, the MinIO installation process will not complete successfully and will get stuck with a permission problem. A quick ownership sanity check is sketched below.
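This is a small check I might run from the TrueNAS® shell before installing (a sketch under my own lab assumptions: the myminio UID/GID of 3000 and the /mnt/pool0/minio/data1 to data4 mount points; substitute your own values):

```python
# Sanity-check that each Host Path dataset is owned by the UID/GID that
# will be entered in the MinIO app's User and Group fields.
# Assumptions (my lab): UID/GID 3000 ("myminio") and four datasets
# mounted at /mnt/pool0/minio/data1 .. data4.
import os

EXPECTED_UID = 3000
EXPECTED_GID = 3000
DATASETS = [f"/mnt/pool0/minio/data{i}" for i in range(1, 5)]

for path in DATASETS:
    st = os.stat(path)
    ok = (st.st_uid == EXPECTED_UID) and (st.st_gid == EXPECTED_GID)
    print(f"{path}: uid={st.st_uid} gid={st.st_gid} -> {'OK' if ok else 'fix ownership'}")
```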

If Host Path is chosen, the following configuration steps are presented. Repeat for every dataset and its mount path by choosing the Add button at the top of that section.

MinIO Host Path mounting

The Mount Path corresponds to a directory called /data1 created inside the MinIO container, and it maps to the /mnt/pool0/minio/data1 dataset. It is important to remember this expansion notation, data{1…4}, since this experiment only uses 4 “drives” or datasets that correspond to 4 mount paths in the single MinIO container. We will come back to this in the MultiMode Configuration later.

Otherwise, if ixVolume (Dataset created automatically by the system) is selected, the action is obvious: TrueNAS® creates the required datasets automatically in the background and assigns them to the default MinIO owner, UID and GID 568.

MinIO ixVolume configuration and mount paths

The most important part of the SNMD deployment configuration is probably the MultiMode Configuration. This determines whether the deployment is SNMD or MNMD. For SNMD, input the following – http://<IP address>:30000/data{1…4} <== this is the expansion notation I mentioned earlier (typed as three periods, {1...4}, in the actual field).

MinIO MultiMode Configuration
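For illustration only, this is roughly how that expansion notation resolves into the four per-drive endpoints of this SNMD deployment (my own sketch of the notation, not MinIO’s internal code; the IP address is a placeholder):

```python
# Illustration of how MinIO's {1...4} expansion notation resolves into
# the four per-drive endpoints of this SNMD deployment.
# (My own sketch of the notation, not MinIO's implementation.)
import re

def expand(endpoint: str) -> list[str]:
    # Expand a single "{start...end}" range, e.g. data{1...4}.
    m = re.search(r"\{(\d+)\.\.\.(\d+)\}", endpoint)
    if not m:
        return [endpoint]
    start, end = int(m.group(1)), int(m.group(2))
    return [endpoint[:m.start()] + str(i) + endpoint[m.end():] for i in range(start, end + 1)]

print(expand("http://192.168.1.50:30000/data{1...4}"))
# ['http://192.168.1.50:30000/data1', 'http://192.168.1.50:30000/data2',
#  'http://192.168.1.50:30000/data3', 'http://192.168.1.50:30000/data4']
```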

The last parts are the MinIO Logging and the Resources Configuration sections. You can choose to Enable LogSearch, which will create 2 more datasets called postgres-data and postgres-backup. These hold the MinIO container’s audit logs and logging. See below for the configuration.

MinIO Logging configuration

Lastly, just accept the default values in the Resources Configuration section. They simply limit the resource consumption of the MinIO container to 4 CPU cores and 8GiB of memory.

MinIO Resources Configuration

Click Install at the bottom to begin the container installation and deployment.

It’s alive and it’s running!

MinIO container deploying

A short while later …

MinIO running

The test in the pudding

Go to http://<IP address>:30000 in the browser. The MinIO login page is presented. Log in with the Root User and Root Password configured earlier.

MinIO browser login

MinIO create bucket

Everything looks good. The MinIO SNMD deployment on a single TrueNAS® SCALE system is a success. That is the pudding test we want.
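Beyond the console, a quick programmatic check with the MinIO Python SDK rounds off the test (a sketch, assuming the minio package is installed with pip, the API endpoint from earlier, and placeholder credentials standing in for the Root User and Root Password):

```python
# Quick end-to-end check with the MinIO Python SDK ("pip install minio").
# Assumptions: the API endpoint from earlier (placeholder IP, port 30000,
# no TLS) and placeholder credentials standing in for the Root User /
# Root Password (MINIO_ROOT_USER / MINIO_ROOT_PASSWORD) set at install time.
from minio import Minio

client = Minio(
    "192.168.1.50:30000",
    access_key="minio-root-user",      # placeholder Root User
    secret_key="minio-root-password",  # placeholder Root Password
    secure=False,                      # no TLS certificate was configured
)

bucket = "snmd-test"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Write a small test file, upload it, then list the bucket contents.
with open("/tmp/hello.txt", "w") as f:
    f.write("hello from the SNMD test\n")
client.fput_object(bucket, "hello.txt", "/tmp/hello.txt")

for obj in client.list_objects(bucket):
    print(obj.object_name, obj.size)
```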

Caveats

Despite this successful deployment experiment with MinIO, there are still many questions on my mind. First of all, MinIO does not recommend deploying the cluster over ZFS, as described in their documentation. TrueNAS®, both SCALE and CORE, uses the OpenZFS filesystem. This is a question to be answered by both vendors.

This also leads to the support scope and levels provided by iXsystems™ and MinIO respectively. I no longer work for iXsystems™ as of last week, and this is not to discredit the TrueNAS® version of the MinIO deployment. However, I come from the point of view of a customer who will want to consider the scope and level of detail of the support SLA provided. This must be well defined in order to keep things running smoothly.

I have also heard of many users, even large enterprise users, trying to DIY a MinIO cluster themselves and provisioning it in a production environment. Many customers architect their MinIO deployments inefficiently. Those who believe they can get a cheap open source storage solution will always encounter the perils of technical deficit at some point. When there is a service meltdown and data loss, crying for enterprise-grade support might be too late. Always, always purchase vendor enterprise support.

Lastly, this is my project and I am doing it for fun. I am doing it to help my learning of how to architect a MinIO object storage solution well, and I must know my options well enough. I am reminding everyone NOT to put this into production. Architect well and get good support.

MinIO on TrueNAS® CORE

I started enjoying MinIO on TrueNAS® CORE probably in 2016. I discovered it in FreeNAS™ version 11.2 and wrote a chapter about it in my eBook, which was released in 2020 (send me a comment with your email if you want the full PDF version). I am not going to work on that deployment here. My friend and ex-colleague at iXsystems™, Joe Dutka, has already done a wonderful job in the explainer video below:

Next steps

Obviously, I will get MinIO MNMD deployed on TrueNAS® SCALE in my lab at some point. I already tested the actual VM-based “bare-metal” MinIO SNMD and MNMD deployment schemes on Linux a couple of years back. The steps are not difficult as long as you are aware of what each instruction step does as the deployment progresses. It’s like assembling Lego®. 😉

Taking this conversation to a higher level for the non-technical crowd: the object storage market segment continues to garner more prominence and use cases as it breaks through the duopoly of SAN and NAS in enterprise and hybrid cloud environments. However, in my part of the world, in Asia and South-East Asia, object storage is still pigeon-holed into a “cheap-and-deep” secondary storage play. And for the organizations here, big and small, most just recognize object storage as “that S3 storage”.

It will take a while for the market here to accept object storage as primary storage – a high-performance, high-throughput storage infrastructure platform for cloud-native applications and, of course, AI use cases as well. I am confident that will change in due time.


About cfheoh

I am a technology blogger with 30 years of IT experience. I write heavily on technologies related to storage networking and data management because those are my areas of interest and expertise. I introduce technologies with the objective of getting readers to know the facts, and to use that knowledge to cut through the marketing hype, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then will there be progress. I am involved in SNIA (Storage Networking Industry Association) and, between 2013 and 2015, I was the SNIA South Asia & SNIA Malaysia non-voting representative to the SNIA Technical Council. I am currently employed at iXsystems as their General Manager for Asia Pacific Japan.

3 Responses to Deploying a MinIO SNMD Object Storage Server in TrueNAS SCALE

  1. Michael says:

    How would your ideal 4-node Proxmox cluster MinIO setup with NVMe drives look? (Let’s assume that you use k3s or, say, Rancher, and you care about H/A.) Would you run a 4-VM Kubernetes cluster and pin the nodes to the (physical) Proxmox nodes?

    • cfheoh says:

      Hello Michael,

      I would always choose bare metal nodes (depending on your erasure set stripe, from 2+2 all the way to 14+2) for the MinIO cluster. This offers many advantages, including resiliency and performance, especially in enterprise settings.

      However, I am pretty sure many homelab users will not have the luxury of a minimum of 4 dedicated servers sitting around. Thus, the next logical choice for me is to run 4 VMs across a 4-node Proxmox cluster for a 2+2 MinIO configuration, as long as the resiliency factor is designed into both the Proxmox and MinIO configurations.

      When it comes to containers, though, the next consideration is whether to use a “direct attached” storage medium or a “shared storage” medium. Using shared storage has its pros, but also its cons. I have my reservations about CSI drivers, because I consider them “shims”, but for now they are the most prominent method of connecting containers in Kubernetes pods to a shared storage system.

      On that note, I have provided some advisory notes for a very large Oil & Gas company setting up a 6+2 MinIO cluster in containers, getting shared storage via NFS from a NetApp system. It worked, but performance went totally down the drain. So, from this experience, the fewer “layers” between MinIO and the physical storage layer, the better.

      Even my blog setup, using TrueNAS with MinIO on top of the OpenZFS filesystem and assigning one dataset per vdev (which is a parity RAID), isn’t a good idea either. I was just having some fun with it because iXsystems (my previous employer) offers the solution with limited support.

      Whilst a bare metal MinIO can achieve pretty high throughput (not IOPS) with the right bare metal configuration (they claim 300+ GB/sec read and 120+ GB/sec write), a clustered MinIO is good for decent performance workloads but not so much in HPC environments. I have spoken to a few system integrators in Asia about AI/ML workloads and it has fallen short.

      So, it boils down to … what do you want out of MinIO?

      Hope this helps. All the best.

  2. Michael says:

    Amazing answer! It’s so interesting to read all of this. The best, most performant solution is probably a pass-through setup with Proxmox (like prox01-vm01-disk01, prox02-vm02-disk02, etc.). I’ll probably go this route, but with 6+2 as you explained. (Thank you!) I am also considering this: https://github.com/sergelogvinov/Proxmox-csi-plugin. So you would still abstract the storage layer a bit – rely on Proxmox for ZFS, create “big”, fat VMs, say 2 TB in size, and put the 4 servers in a Proxmox pool. The benefit here is that the VMs are not pinned and can move around. So it’s very flexible and very H/A. It may even perform quite well. I haven’t tested this though!
