NAS (network attached storage) is the file-level workhorse for shared resources in the network of any organization. SMB (server message block) for Windows environments and NFS (network file system) for Linux platforms are the two most prominent protocols that rule the NAS world. Of course, there are also SMB implementations on non-Windows platforms, such as Samba on Linux, as well as NFS implementations on Windows.
As both network file sharing protocols have iterated, the present client-side versions of SMB v3.x and NFS v4.x (and NFS v3 on supported Linux kernel versions) have evolved well. Both now have enhanced resiliency and performance features, and there is an underlying similarity between the two implementations. This blog looks at the client-side architectures of both.
One TCP connection
NAS is a client-server architecture. Over the network, NAS clients (SMB or NFS) access their corresponding NAS server(s) – SMB or NFS server(s) – through the TCP/IP network.
One key starting point to note is that, traditionally, each NAS client uses a single TCP connection to a given NAS server. For both SMB and NFS, there is just one TCP connection between the client and the server, even if the client has several SMB mapped shares or NFS mount points against that server.
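On a Linux NFS client, this is easy to verify. Below is a minimal Python sketch (my own illustration, not part of either protocol) that counts the established TCP connections this host has to the default NFS port 2049 by reading /proc/net/tcp. With a traditional NFS mount, the count stays at one no matter how many mount points you have against the same server.

```python
#!/usr/bin/env python3
"""Count established TCP connections from this NFS client to port 2049.

A minimal sketch for Linux: it parses /proc/net/tcp (and tcp6), so it only
sees connections made by this host. Assumes the NFS server listens on the
default port 2049; adjust NFS_PORT_HEX if your server uses another port.
"""

NFS_PORT_HEX = format(2049, "04X")   # 2049 -> "0801" as shown in /proc/net/tcp
ESTABLISHED = "01"                   # TCP state code for ESTABLISHED

def count_nfs_connections() -> int:
    count = 0
    for table in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(table) as f:
                next(f)                          # skip the header line
                for line in f:
                    fields = line.split()
                    remote, state = fields[2], fields[3]
                    if remote.endswith(":" + NFS_PORT_HEX) and state == ESTABLISHED:
                        count += 1
        except FileNotFoundError:
            pass                                 # e.g. IPv6 table not present
    return count

if __name__ == "__main__":
    print(f"Established TCP connections to port 2049: {count_nfs_connections()}")
```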
For a long time, this one TCP connection was sufficient for the NAS traffic. But as network file access grows, that single connection becomes both a single point of failure and a performance bottleneck.
More than one TCP connection
Developers of both protocols recognized this issue. The multi-TCP-connection capability was introduced in SMB 3.0 with Windows Server 2012/Windows 8, and in the Linux NFS client with kernel 5.3 (2019). This allows one SMB session or one NFS session to run across multiple TCP connections.
This feature is known as client-side SMB Multichannel and NFS nconnect respectively. Below are 2 diagrams that show the architectures of SMB Multichannel and NFS nconnect.
The SMB and NFS session traffic is multiplexed across the multiple TCP connections between the NAS client and the NAS server. This multiplexing allows the NAS client to load balance the NAS packets across all available TCP connections, ushering in higher performance for the SMB and NFS deliveries respectively. Should one TCP connection become unavailable, the NAS sessions continue without disruption, resulting in higher network resiliency for file sharing.
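On the NFS side, the number of TCP connections is requested at mount time through the nconnect mount option (available in Linux kernel 5.3 and later). The Python sketch below is just an illustration of that mount call; the server name nas01, export /export/data and mount point /mnt/data are hypothetical, and it needs root privileges. SMB Multichannel, by contrast, is typically negotiated automatically between capable Windows clients and servers, so there is nothing comparable to set when mapping a share.

```python
#!/usr/bin/env python3
"""Mount an NFS export with multiple TCP connections (nconnect).

A minimal sketch, not production code. The server name (nas01), export path
(/export/data) and mount point (/mnt/data) are hypothetical; run as root on a
Linux 5.3+ kernel where the nconnect mount option is available.
"""
import subprocess

SERVER = "nas01"
EXPORT = "/export/data"
MOUNT_POINT = "/mnt/data"
CONNECTIONS = 8          # number of TCP connections requested for this mount

# Equivalent to:
#   mount -t nfs -o vers=4.1,nconnect=8 nas01:/export/data /mnt/data
options = f"vers=4.1,nconnect={CONNECTIONS}"
subprocess.run(
    ["mount", "-t", "nfs", "-o", options, f"{SERVER}:{EXPORT}", MOUNT_POINT],
    check=True,
)
print(f"Mounted {SERVER}:{EXPORT} on {MOUNT_POINT} with nconnect={CONNECTIONS}")
```

Putting the same vers=4.1,nconnect=8 options into /etc/fstab achieves the same thing persistently across reboots.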
Things to note
The simple explanation so far is based on vanilla client-side implementations of both SMB and NFS. The similarity here is the use of multiple TCP connections for one or more SMB or NFS sessions respectively. There are additional things to note.
SMB Multichannel supports RSS (receive side scaling)-capable NICs, which allow the TCP connections to be distributed across multiple cores of the client-side processor (and the server side too). This removes a bottleneck at the CPU level and further enhances the performance of SMB Multichannel. Do note that the appropriate network driver is required for the RSS-capable NIC. More RSS details for Windows® here.
There are limits to the number of TCP connections allowed per client-server relationship. SMB Multichannel maxes out at 32, while NFS nconnect's limit is 16.
Furthermore, there are unique requirements depending on the client's OS and the NAS server code. In the table example below, Azure NetApp Files has these requirements.
Thus it is important to validate the requirements for both SMB Multichannel and NFS nconnect, client and server alike. Your mileage may vary.
How does the performance fare
Naturally, the question is how much performance improvement each feature delivers. Taken from NetApp® TR-4740, the SMB Multichannel 3.0 technical report, the chart below shows a vast improvement in SMB session throughput with SMB Multichannel turned on compared to without.
Similarly, an older blog published under the Pure Storage® banner includes a performance comparison table for the NFS nconnect feature. It shows a huge performance jump, from approximately 1GB/sec throughput without nconnect to almost 7GB/sec with nconnect turned on.
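If you want a rough feel for the difference on your own mounts, a crude sequential-read timing like the Python sketch below is enough to see the trend, though a proper tool such as fio is the right way to benchmark. The file path is hypothetical, and client-side caching will inflate the number unless the file is much larger than RAM.

```python
#!/usr/bin/env python3
"""Rough sequential-read throughput check against an NFS/SMB mount.

A minimal sketch only: the test file path is hypothetical, the result depends
on client caching (use a file much larger than RAM for a fairer number), and
this is no substitute for a proper benchmark tool such as fio.
"""
import time

TEST_FILE = "/mnt/data/bigfile.bin"   # hypothetical large file on the NAS mount
CHUNK = 1 << 20                       # read in 1 MiB chunks

bytes_read = 0
start = time.monotonic()
with open(TEST_FILE, "rb") as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        bytes_read += len(chunk)
elapsed = time.monotonic() - start

print(f"Read {bytes_read / 1e9:.2f} GB in {elapsed:.1f} s "
      f"= {bytes_read / 1e9 / elapsed:.2f} GB/s")
```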
Moving forward
This blog hopes to provide some basic foundational awareness of these NAS features on the client side. SMB and NFS client performance and resiliency continue to improve.
On the NAS server side, Parallel NFS (pNFS), which is not discussed here, provides distributed, high-performance NFS services and is one area to explore. The high-performance capabilities of NFS Ganesha, a user-mode NFS server implementation, are another. Huawei has a high-performance NFS client and server technology called NFS+, released under their openEuler project. In the SMB world, Microsoft® has had a scale-out SMB cluster since Windows Server 2012.
The NAS performance and resiliency features described here are not exhaustive. I do not know all of them, nor all the vendors delivering high-performance NAS on both the server and the client side. Both NAS protocols keep improving, and their resiliency and performance keep growing in tandem. These are exciting times.