Thursday, March 21, 2013

NFS vs. Block (FC/iSCSI) Protocol

Hi All,

Protocol seems to be one of those topics you don't discuss over dinner, like politics!  Storage folks tend to be one camp or the other, NFS or block, and I've heard some very heated debates on this topic.  Yep, nerd fights!  I used to be a die-hard block guy, especially for enterprise storage.  NFS was cool for file sharing or home directories, but BLOCK was king when it came to databases, mission-critical applications, etc.  When I went to work for NetApp my opinion quickly changed, especially when I began working on virtualization products like VMware.

When I'm talking with customers I'm frequently asked, "Which is better, block or NFS?"  It's a bit of a loaded question because the customer usually has an idea of what they like better, plus they've probably already spent a ton of money on a new infrastructure.  The last thing I want to do is call their baby ugly!  I usually tell them *I* prefer NFS, and I list out some reasons.  But in case they're a block shop, I remind them that NetApp can do both NFS and block concurrently. :-)

I was reading our internal discussion groups this morning and saw a great post by Nick Triantos.  For those of you that don't know Nick, he's a brilliant guy and an avid blogger.  I'm always amazed how quickly and precisely he knows the answers to things that would take me much more time to find out!  He answered a question on NFS vs. iSCSI for VMware this morning and I'd like to share it with you.  Enjoy!

"Datastore resizing is another difference... With NFS you can resize up or down on the fly.  With VMFS you can only increase the size of the datastore.  That means if you ever need to rebalance a VMFS datastore by Storage vMotioning VMs to another datastore, you now have captive storage you can't reuse, unless you create a new datastore of the required size, move your VMs into it, and destroy the old one.  Is it a hard thing to do?  No.  Does it require additional steps on the server and the storage side?  Yes.

Deduplication (post-process dedup, that is) over NFS allows a VMware admin to realize the space savings on the host immediately, without any additional work.  The same is not true for block protocols, where additional work needs to be done on the storage array.

There is no limitation on NFS as to the datastore size; the limit is whatever the file server supports.  The same is not true for block protocols, where the datastore size is capped at 64TB.  Although I find it hard to believe anyone will create a 64TB datastore anyway.

Also, each virtual disk file on NFS has its own I/O queue, directly managed by the NFS server.  This is not true for block protocols, which have per-LUN queues that can become a point of I/O contention.  All that translates to higher fan-in ratios in terms of the number of VMs per NFS datastore vs. a VMFS datastore.  Partners have been telling me for a long time that they have customers with 250 and 300 VMs in a single NFS datastore.  The only way you can pack that many VMs into a single VMFS datastore without issues is if they are powered off. :-)

The benefit of NFS is day to day operational efficiency and granularity. The architecture does require some thinking upfront and is largely dependent on the switching infrastructure, but once you lay it down, everything else is a breeze.

In the interest of full disclosure... there are some caveats with NFS.  There is no support from Microsoft for Exchange deployments, although at VMworld last year I met with customers that ignored the support statement and have been running that way with no issues.  You also can't use Microsoft Failover Clusters (you can't use those with native vSphere iSCSI either).  So as long as these are not required, NFS is the right choice, IMO.

Last but not least, considering that virtual machines are comprised of a bunch of files, why would you use a block protocol to manage them to begin with, if you had a choice?"
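To make the resizing point above concrete: on a NetApp filer, the volume behind an NFS datastore can be grown or shrunk with one command, and the ESXi host sees the new size immediately; growing VMFS takes steps on both the array and the host. A rough sketch, assuming Data ONTAP 7-Mode syntax and hypothetical volume, LUN, and device names:

```shell
# NFS side: resize the exporting volume on the filer.
# "vol_nfs_ds1" is a hypothetical volume name.
vol size vol_nfs_ds1 +500g   # grow the datastore by 500 GB
vol size vol_nfs_ds1 -200g   # shrink it -- not possible with VMFS

# Block side: grow the LUN on the array, then grow VMFS on the host.
lun resize /vol/vol_blk/lun0 1t           # array: grow the LUN
esxcli storage core adapter rescan --all  # host: pick up the new size
vmkfstools --growfs \
  "/vmfs/devices/disks/naa.xxxx:1" "/vmfs/devices/disks/naa.xxxx:1"
```

The `naa.xxxx` device name is a placeholder for the real NAA identifier of the LUN; the point is simply the number of moving parts each path requires.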


  1. How do you get around the fact that LAG groups in VMware ESX(i) 4/5/5.1 don't work for the NFS client?
    I would love to just be able to buy 10Gb Ethernet for everything, but it's still a tad cost prohibitive...
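One common workaround (my note, not the commenter's; a sketch assuming ESXi 5.x `esxcli` syntax and hypothetical addresses and export paths): since each NFS datastore mount is a single TCP session, you can mount different datastores against different filer IPs on separate subnets/VMkernel ports, so the sessions ride different physical uplinks even without a working LAG:

```shell
# Mount two datastores via two different filer interfaces so each NFS
# session uses a different uplink. IPs, exports, and names are hypothetical.
esxcli storage nfs add --host 192.168.10.5 \
  --share /vol/vol_ds1 --volume-name nfs_ds1
esxcli storage nfs add --host 192.168.20.5 \
  --share /vol/vol_ds2 --volume-name nfs_ds2
esxcli storage nfs list   # verify both mounts are up
```

This balances per datastore rather than per packet, so it takes some upfront layout planning, which matches the "thinking upfront" caveat in the quoted post.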

  2. I have a Nexenta SAN, and so can test both ways of building a store. Here are the key differentiators (I have actually verified these with ESXi/vSphere 5.0 and 5.1, including 5.1u1):
    NFS cannot do powered-on vMotion; iSCSI can.
    Powered-off vMotion is about 10x faster on iSCSI than NFS.
    Compression and deduplication are better on NFS than iSCSI.
    Since NFS is a real filesystem, using standard backup to back up the VMDKs is easy; not so over iSCSI.
    Nexenta snapshots can be done on a live shared NFS system, but not on an iSCSI one (it has to be unshared).
    iSCSI is the only way to do native OS clustering in Windows (in the VMs, not the hypervisor), but it requires configuring security (either user or initiator IP) for ALL iSCSI shares (otherwise Windows will try to hook the VMFS shares as well).
    MySQL to NFS is awful.
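On the in-guest clustering point above: the usual way to keep guest-attached iSCSI LUNs away from other initiators is to map each LUN only to the cluster nodes' IQNs. The Nexenta syntax differs; as an illustration, here is the equivalent on a NetApp 7-Mode array, with hypothetical IQNs and volume/igroup names:

```shell
# Restrict the quorum LUN to the two Windows cluster nodes' initiators,
# so other hosts (e.g. the ESXi servers) never see it.
igroup create -i -t windows mscs_nodes \
  iqn.1991-05.com.microsoft:node1.example.com
igroup add mscs_nodes iqn.1991-05.com.microsoft:node2.example.com
lun map /vol/vol_mscs/quorum mscs_nodes 0   # present as LUN ID 0
```

The same masking idea applies on any array: if a LUN is only mapped to the guest initiators, the hypervisor cannot accidentally claim it.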

    1. I have had no issues with powered-on vMotions on NetApp with NFS. Is this a limitation of Nexenta?

  3. Yes, but when the requirement is performance, NFS is not a real option. Sorry, but that is still the case: FC/block is still the king for enterprise mission-critical applications.

    1. This is definitely not true. I work at a Fortune 500 global company and 100% of our VMware infrastructure is on NFS storage, and we virtualize everything, including Exchange, SAP, and Oracle DB (thousands of them). We have higher performance, better space utilization, and a much simpler infrastructure because we don't have Fibre Channel, FCoE, or iSCSI polluting our VMware infrastructure.

    2. FC is the only option for performance and latency. Sorry, but the numbers speak for themselves; I don't see NFS doing millions of IOPS anywhere. Even NetApp is starting to reverse course on the NFS side and push some of the features they've had onto iSCSI, because they are losing customers to EMC/Pure/Nimble.

    3. I think you went to sleep and woke up in the year 2000, my friend. I assume you've heard of 10GbE. Take a look at Tintri and the IOPS you can push through a 3U array. VMware will add NFS v4.x at some point, effectively taking away any resiliency advantages that block has.

  4. I very much like this article, but one thing I never understand is why people never consider the backup element of these protocols when using SnapManager tools. In my experience this is a fundamental factor in the decision of what to use, rather than performance and storage benefits, as not all tools are supported on NFS.

  5. I just deployed an NFS VMware 5.5 solution on NetApp. I'm blown away at the performance and ease of administration on the storage side. Regarding the post about backups, I agree that you have to factor that into your strategy when you architect your environment.

  6. Oh, I forgot to mention we are sitting on Dell blades and NFS is going through Cisco Nexus 10GbE. Our vMotions and Storage vMotions happen fast.