I was at the RedHat Forum last week when I chanced upon a conversation between an attendee and one of the ECS engineers. The conversation went something like this:
Attendee: Is the RHEV running on SAN or NAS?
ECS Engineer: Oh, for this demo, it is running NFS, but in production, you should run iSCSI or Fibre Channel. NFS is for labs only, not good for production.
Attendee: I see … (and he went off)
I was standing next to them munching my mini-pizza, and in my mind I was going, “Oh, come on, NFS is better than that!”
NAS has always played the smaller brother to SAN, but usually for the wrong reasons. Perhaps it is the perception that NAS is low-end and not good enough for high-end production systems. However, this is very wrong, because NAS has been growing at a faster rate than Fibre Channel, while Fibre Channel growth has been tapering and is possibly on the wane. And I have always said that NAS is a better-suited protocol when it comes to unstructured data and files, because the NAS protocol is the new storage networking currency of Internet storage and the Cloud (this could change very soon with the REST protocol, but that’s another story). Where else can you find a protocol where sharing is key? iSCSI, even though it has been growing at a faster pace in production storage, cannot be shared easily because it is block-based.
Now back to NFS. NFS version 3 has been around for more than 15 years and has taken its share of bad raps. I agree that this version is still very much the one found in most NFS installations. But NFS version 4 is changing all that, taking on the better parts of the CIFS protocol, notably delegations, the equivalent of opportunistic locking or oplocks. In addition, it has greatly enhanced its security, incorporating Kerberos-based authentication. As for performance, NFS v4 added COMPOUND operations for aggregating multiple operations into a single request.
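To illustrate the idea behind COMPOUND, here is a conceptual Python sketch. It is not a real NFS client; only the operation names (PUTFH, LOOKUP, GETATTR, READ) come from the NFSv4 specification, and everything else is invented purely to show the round-trip saving over NFSv3’s roughly one-RPC-per-operation style.

```python
# Conceptual sketch only -- not a real NFS client library.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Op:
    name: str                               # e.g. "PUTFH", "LOOKUP", "GETATTR", "READ"
    args: dict = field(default_factory=dict)

@dataclass
class CompoundRequest:
    """One NFSv4-style request carrying a batch of operations."""
    ops: List[Op] = field(default_factory=list)

    def add(self, name: str, **args) -> "CompoundRequest":
        self.ops.append(Op(name, args))
        return self

def round_trips_v3(ops: List[Op]) -> int:
    # NFSv3: roughly one RPC, i.e. one network round trip, per operation
    return len(ops)

def round_trips_v4(ops: List[Op]) -> int:
    # NFSv4: the whole batch travels inside a single COMPOUND RPC
    return 1

if __name__ == "__main__":
    req = (CompoundRequest()
           .add("PUTFH", fh="root")
           .add("LOOKUP", name="disk-flat.vmdk")
           .add("GETATTR")
           .add("READ", offset=0, count=4096))
    print("NFSv3-style round trips:", round_trips_v3(req.ops))    # 4
    print("NFSv4 COMPOUND round trips:", round_trips_v4(req.ops)) # 1
```

Fewer round trips means less latency stacking per file operation, which is part of why NFSv4 narrows the gap with block protocols.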
Today, most virtualization solutions from VMware and RedHat work with NFS natively. Note that the Windows CIFS protocol is not supported, only NFS.
This blog entry is not stating that NFS is better than iSCSI or FC; it is about giving NFS credit where credit is due. NFS is not inferior to these block-based protocols. In fact, there are situations where NFS is better, for instance, expanding an NFS-based datastore on the fly in a VMware implementation. I will use several performance-related examples, since performance is often used as a yardstick when these protocols are compared.
In an experiment conducted by VMware based on version 4.0, with all things being equal, below is a series of graphs comparing these 3 protocols (NFS, iSCSI and FC). Note the comparison between NFS and iSCSI rather than FC, because NFS and iSCSI run on Gigabit Ethernet, whereas FC is on a different networking platform (hey, if you’ve got the money, go ahead and buy FC!).
Based on one virtual machine (VM), the read throughput statistics (higher is better) are:
The red circle shows that NFS is up there with iSCSI in terms of read throughput from 4K blocks to 512K blocks. As for write throughput for 1 VM, the graph is shown below:
Even though NFS suffers in write throughput at the smaller block sizes below 16KB, its write throughput overtakes iSCSI in the 16K to 32K range and is equal in the 64K, 128K and 512K block tests.
The 2 graphs above are for a single VM. But in most real production environments, a single ESX host will run multiple VMs, and here is the throughput graph for multiple VMs.
Again, you can see that in a multiple-VM environment, NFS and iSCSI are equal in throughput, dispelling the notion that NFS is not as good in performance as iSCSI.
Oh, you might say that these are just VMs without any OSes or applications running in them. Next, I want to share with you another performance test conducted by VMware for a Microsoft Exchange environment.
The next statistics are produced from an Exchange Load Generator (popularly known as LoadGen) to simulate the load of 16,000 Exchange users running in 8 VMs. With all things being equal again, you may be surprised when you see these graphs.
The graph above shows the average send mail latency of the 3 protocols (lower is better). On average, NFS has lower latency than iSCSI, better than what most people might think. Another graph shows the 95th percentile of send mail latency below:
Again, you can see that NFS’s latency is lower than iSCSI’s. Interesting, isn’t it?
What about IOPS then? In another test with an 8-hour DoubleHeavy LoadGen simulator, the IOPS graphs for all 3 protocols are shown below:
In the graph above (higher is better), NFS performed reasonably well compared to the other 2 block-based protocols, and even outperformed iSCSI in this 8-hour load test. Surprising, huh?
As I have shown, NFS is not inferior to block-based protocols such as iSCSI. In fact, VMware has improved all 3 storage protocols significantly in version 4.1, as mentioned in the VMware paper. The following is quoted from the paper for NFS and iSCSI:
- Using storage microbenchmarks, we observe that vSphere 4.1 NFS shows improvements in the range of 12–40% for Reads, and improvements in the range of 32–124% for Writes, over 10GbE.
- Using storage microbenchmarks, we observe that vSphere 4.1 Software iSCSI shows improvements in the range of 6–23% for Reads, and improvements in the range of 8–19% for Writes, over 10GbE.
The performance improvement for NFS is significant when the network infrastructure is 10GbE. The jump for writes is between 32-124%! That’s a whopping figure compared to iSCSI, which ranged from 8-19%; a 124% improvement means write throughput more than doubles, while 19% is a modest bump. Since both protocols were neck and neck in version 4.0, NFS seems to be taking a bigger lead in version 4.1. With the release of VMware version 5.0 a few weeks ago, we shall know the performance of both NFS and iSCSI soon.
To be fair, NFS does take a higher CPU performance hit compared to iSCSI, as the graph below shows:
Also note that the load tests are based on NFS version 3. If version 4 were used, I am sure the performance statistics above would reach a whole new plateau.
Therefore, NFS isn’t inferior at all compared to iSCSI, even in a 10GbE environment. We just have to know the facts instead of brushing off NFS.
There was a joint multiprotocol test done by VMware and NetApp, comparing performance of datastores on 4Gbps FC, 1Gbps iSCSI and NFS. The results show a max 9% difference between all 3 protocols. A bit dated, but a good comparison nonetheless. In fact, a lot of our larger, more complex deployments are on NFS.
http://media.netapp.com/documents/tr-3697.pdf
Hi William
Thanks for sharing … If you have any good articles to promote NetApp, let me know. The HP guys want me to do a write-up about their P6000, and IBM is thinking along the same lines.
/Chin Fah
Actually, it’s interesting to see something else: there is no discernible performance difference between SW iSCSI and HW iSCSI! So what are all these “offload” engines and whatnot I’ve been hearing about?
Hi Tien
It is very interesting to note that. Perhaps the hypervisor isn’t taking advantage of TOE capabilities like other OSes do.
I have an idea. How would you like to be a guest writer on this blog? I know from Ammar and Deo that you are pretty good with storage technologies, and I think you can share some of your experiences and knowledge.
I have invited several people to become guest writers as well, and eventually I will consolidate both the FB group and this blog into one. Then I will turn it over to SNIA Malaysia and let it grow from there. How about that?
Thanks
/Chin Fah
Thank you for your tests and comparison. I am also quite a big fan of NFS over iSCSI because it gives me much more power to work with files directly on the storage system without mounting the file system. But for these tests, I would be interested in how the results would look if the Ethernet were 10Gb/s instead of 1Gb/s.
Hello
That blog was written more than 12 years ago. Much has changed.
NFS may have the edge in ease of administration and scripting (perhaps some automation), but iSCSI delivers a lighter load over the network. So from an end-to-end perspective, iSCSI is likely to perform better than NFSv3. NFSv4 brings improvements, but the default mount options are often not optimized for performance, so some level of NFS understanding is needed to get better performance out of most NFS implementations; a minimal sketch of the kind of mount tuning involved follows below.
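Since the reply above mentions mount options, here is a minimal sketch of what that tuning can look like, assuming a plain Linux NFS client rather than a hypervisor; the server name, export path and option values are illustrative assumptions, not recommendations for any particular array or workload.

```python
# A minimal sketch (assumed Linux NFS client, not a hypervisor) of NFS mount tuning.
# The option values are illustrative starting points only.
import subprocess

def mount_nfs(server: str, export: str, mountpoint: str) -> None:
    options = ",".join([
        "vers=4.1",        # ask for NFSv4.1 rather than whatever the distro defaults to
        "rsize=1048576",   # requested read transfer size (the server may negotiate it down)
        "wsize=1048576",   # requested write transfer size
        "hard",            # retry indefinitely instead of returning I/O errors to applications
        "proto=tcp",
    ])
    subprocess.run(
        ["mount", "-t", "nfs", "-o", options, f"{server}:{export}", mountpoint],
        check=True,
    )

# Example (needs root and a reachable NFS server; names are hypothetical):
# mount_nfs("filer01", "/vol/datastore1", "/mnt/datastore1")
```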
Given that most compute systems already have many cores and threads and a very large RAM footprint, the processing impact of iSCSI at the hypervisor level is negligible. It still depends on VM load designs, etc.