The beginning of the end of FCoE

Never bet against Ethernet!

I am sure many IT experts and practitioners would agree. In the past 30 years or so, Ethernet has fought off and won against many so-called "Ethernet killers". The one that stood out for me was ATM (Asynchronous Transfer Mode), because in a past job I implemented NFS over ATM, running in LANE (LAN Emulation) mode, in a NetApp filer setup at Sarawak Shell.

That was more than 10 years ago, and back then ATM was hot technology. It was touted as the next-generation network technology, supposed to unify voice, data and networking. ATM also had better framing and QoS (Quality-of-Service) control, and offered several modes of traffic shaping and policies. Today, ATM is reduced to a niche telecommunication protocol and does not participate much in the LAN technology space.

That was the networking space. The storage networking space has been dominated by Fibre Channel for almost 15 years. Fibre Channel is a serial technology that replaced the parallel, channel-based SCSI in the enterprise. And Fibre Channel has also grown by leaps and bounds, dominating the SAN (Storage Area Network) landscape with speeds up to 16Gbit/sec today.

When the networking world and the storage networking world collided (I mean combined) with Fibre Channel over Ethernet (FCoE) technology some years back, something had to give sooner or later. Yup, FCoE was really hot 2 years ago, but where is it today? Is Cisco still singing about FCoE like it used to? What about the other storage vendors that used to have at least 1 FCoE slide in their product presentations?

Welcome to the world of IT hypes! FCoE's benefit? The ability to carry LAN and SAN traffic on one piece of wire. 10 Gigabit-style, baby!

Looking into it deeper, the FCoE Host Bus Adapter (HBA) has 2 sets of "engines" to process Ethernet traffic and Fibre Channel traffic. When the FC frame is encapsulated within the Ethernet frame, the packaging looks like this:
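For those who prefer code to diagrams, here is a rough sketch of that encapsulation in Python. It is an illustration only, assuming a simplified view of the FC-BB-5 framing; the reserved-byte counts, the SOF/EOF code points and the example MAC and FC addresses are approximations I made up for the sketch, not a wire-exact implementation.

```python
# Illustrative sketch: a Fibre Channel frame riding inside an Ethernet frame (FCoE).
# Field widths and SOF/EOF code points are simplified approximations, not FC-BB-5 exact.
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def fc_header(s_id: bytes, d_id: bytes, fc_type: int = 0x08) -> bytes:
    """Build a 24-byte FC frame header (TYPE 0x08 = SCSI-FCP)."""
    return struct.pack(
        ">B3sB3sB3sBBHHHI",
        0x06, d_id,                 # R_CTL, D_ID (destination FC address)
        0x00, s_id,                 # CS_CTL, S_ID (source FC address)
        fc_type, b"\x00\x00\x00",   # TYPE, F_CTL
        0x00, 0x00, 0x0000,         # SEQ_ID, DF_CTL, SEQ_CNT
        0x0001, 0xFFFF,             # OX_ID, RX_ID
        0x00000000,                 # Parameter
    )

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an FC frame: Ethernet header, FCoE version/SOF, FC frame, EOF trailer."""
    eth_hdr   = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
    fcoe_hdr  = bytes(13) + b"\x36"   # version + reserved (simplified), then SOF
    fcoe_tail = b"\x41" + bytes(3)    # EOF plus reserved padding (FCS added by the NIC)
    return eth_hdr + fcoe_hdr + fc_frame + fcoe_tail

# Example with made-up addresses: an FC frame with a token payload, wrapped for Ethernet.
frame = fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01", b"\x00\x1b\x21\xaa\xbb\xcc",
                   fc_header(s_id=b"\x01\x02\x03", d_id=b"\x04\x05\x06") + b"SCSI payload")
print(len(frame), "bytes on the wire (before FCS)")
```

The point of the sketch is simply that the whole FC frame, header and all, becomes the payload of an ordinary (if slightly oversized) Ethernet frame, which is why a lossless, DCB-capable Ethernet fabric is needed underneath.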

But FCoE is not getting cheap fast enough to encourage mass adoption of the technology. At the same time, the virtualization forces have been gaining strength and momentum. When VMware purchased Nicira for USD$1.26 billion a couple of weeks ago, a new buzzword leapt to the fore – "Software Defined Networking (SDN)". That was probably the final nail in FCoE's coffin, as the IT world shifted its focus from the "unifying story" of FCoE to SDN, the new darling of the networking world.

OK, OK, maybe not exactly SDN per se, because a week after the VMware acquisition of Nicira, Oracle decided to purchase Xsigo, perhaps for the same reason(?). Perhaps that was what Oracle needed after seeing that Project Crossbow from Sun wasn't going anywhere fast, probably because all its core engineers and developers had left for obvious reasons.

Xsigo isn't exactly SDN. They are an I/O virtualization (IOV) technology, just like Virtensys, but then again it could be called SDN because it virtualizes the physical networking interfaces into many virtual networking interfaces, whether Ethernet, FC HBA, or even InfiniBand. Software is the layer that abstracts the underlying networking architecture. Here's a short video of what Xsigo does:


This means that if SDN or IOV has anything to say about this, the next battleground for networking and storage networking will be virtualized network resources. Server virtualization platforms such as VMware and Hyper-V already have network virtualization in the Compute Layer, with concepts such as vNICs, vHBAs and vSwitches. The Networking Layer, dominated by Cisco and Juniper, will have plenty to say with new disruptive technologies such as OpenFlow.

If the switching companies such as Cisco and Juniper, and to a lesser extent Xsigo, decide to use 10 Gigabit Ethernet or the future 40/100 Gigabit Ethernet as the only wire from the storage layer to the compute layer, delivering iSCSI over Ethernet for now and perhaps ditching the iSCSI overheads later for a SCSI-over-Ethernet (is there such a technology yet?) à la ATA-over-Ethernet (AoE), then the Fibre Channel protocol itself could be at risk of losing its shine. After all, all we want is to send and receive ATA or SCSI command sets over the wire, isn't it?
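Just to make the thought concrete, here is a purely hypothetical sketch of what such an AoE-style SCSI-over-Ethernet frame could look like. The EtherType value and the bare framing are invented for illustration, since no such standard exists; only the READ(10) CDB layout is real SCSI.

```python
# Hypothetical sketch only: a raw Ethernet frame carrying a bare SCSI CDB, AoE-style.
# The EtherType below is made up; there is no SCSI-over-Ethernet standard today.
import socket
import struct

HYPOTHETICAL_ETHERTYPE = 0x88FF   # invented value, for illustration only

def scsi_read10_cdb(lba: int, blocks: int) -> bytes:
    """A real SCSI READ(10) CDB: opcode 0x28, 4-byte LBA, 2-byte transfer length."""
    return struct.pack(">BBIBHB", 0x28, 0x00, lba, 0x00, blocks, 0x00)

def scsi_over_ethernet_frame(dst_mac: bytes, src_mac: bytes, cdb: bytes) -> bytes:
    """Destination/source MACs, the made-up EtherType, then the CDB as the payload."""
    return dst_mac + src_mac + struct.pack(">H", HYPOTHETICAL_ETHERTYPE) + cdb

# Sending it would need a raw socket (Linux, root) bound to an interface, e.g. "eth0":
# s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
# s.bind(("eth0", 0))
# s.send(scsi_over_ethernet_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
#                                 scsi_read10_cdb(lba=0, blocks=8)))
```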

Yeah, yeah, I know someone is going to knock me for saying this is a stupid idea, because where's TCP/IP? Where's the addressing bit to ensure that the SCSI PDUs (protocol data units) and the CDBs (command descriptor blocks) arrive at the correct destinations?

If TCP is able to establish a session once IP has found the target, then it would not be too difficult to work out a point-to-point connection over the TCP session between the target and the initiator, and then let RDMA (remote direct memory access) offload do its job of sending and receiving PDUs and CDBs efficiently. This idea might just work.
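A minimal sketch of that idea in Python, assuming a made-up length-prefixed framing and a made-up target port; the RDMA offload piece, which needs RDMA-capable NICs and verbs rather than plain sockets, is deliberately left out.

```python
# Minimal sketch of the idea above: once TCP has a session to the target, ship SCSI
# CDBs over it point-to-point. The 4-byte length prefix and port 3260 are assumptions
# for illustration; the RDMA offload mentioned in the post is not shown here.
import socket
import struct

def _read_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes off the session (TCP may deliver in smaller chunks)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the session")
        buf += chunk
    return buf

def send_cdb(sock: socket.socket, cdb: bytes) -> None:
    """Frame a CDB with a simple length prefix and push it down the TCP session."""
    sock.sendall(struct.pack(">I", len(cdb)) + cdb)

def recv_cdb(sock: socket.socket) -> bytes:
    """Pull one length-prefixed CDB off the TCP session (the target side)."""
    (length,) = struct.unpack(">I", _read_exact(sock, 4))
    return _read_exact(sock, length)

# Initiator side, against a hypothetical target address:
# s = socket.create_connection(("target.example.com", 3260))
# send_cdb(s, bytes([0x28, 0, 0, 0, 0, 0, 0, 0, 8, 0]))  # READ(10), 8 blocks from LBA 0
```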

If SCSI-over-Ethernet can be defined with SDN or IOV technology, that would certainly be the death knell for FCoE. Anyone heard of the Insieme spin-in project of Cisco? You can read about it here and here.



9 Responses to The beginning of the end of FCoE

  1. Pingback: The beginning of the end for FCoE « Storage Gaga

  2. A couple of points regarding FCoE. Talking just about the bandwidth and physical transport of the new Ethernet is not enough. One has to compare FC over FC (FC levels 1 and 0) to FC over this new Ethernet (Data Center Bridging). This DCB has a life of its own, regardless of FC using these new Ethernet services. Those services are progressing and are going to be successful, again regardless of FC. They provide new Ethernet functions for FC and everything else that maps onto Ethernet, including iSCSI/TCP/IP.

    I think your diagram showing FC coming out of the Ethernet pipe could be improved by also showing iSCSI/TCP/IP, as well as other protocols that use Ethernet, like FTP, NFS and CIFS for NAS environments (LAN over Ethernet), too.

    The stumbling block for me and FCoE is not so much an Ethernet problem. It is the requirement that all the services like zoning, discovery, management and login are still done in a hardware-based FC switch. Yes, with point-to-point FC there is the possibility with FCoE that the reliance on the FC switch might be reduced, but to move to FCoE requires an additional upgrade of FC switches, not only for FCoE ports but for the remaining services as well. In many ways, FCoE is a disguise for preserving FC switches and the SAN services they provide. iSCSI can use iSCSI Storage Name Services and Discovery Domains to have a service that can run anywhere, providing many of these services. In many ways that is the real benefit of iSCSI-based solutions. It has nothing to do with Ethernet, really. Ethernet will continue to be the bandwagon, with 40 and 100 Gbps not far away. The CNA approach will serve not only IP and FC but also HPC with InfiniBand. The Ethernet development plan is one that will continue to deliver higher bit rates and higher bandwidth. You must look at the higher-level services to truly understand what is going on!

  3. Great post and I love this blog! I fully agree that FC doesn't seem to be going anywhere fast, but the SDN perspective you mentioned puts an interesting spin on things. I would have to question whether FCoE is facing a death, though, despite having had the same opinion only a year or so ago. (Disclaimer) I currently work for VCE and we deploy FCoE as a standard, with pre-integrated Cisco UCS and Fabric Interconnects, in every single Vblock. What we are seeing is that the adoption of FCoE is taking place quite significantly and successfully, with minimal focus on the networking or protocol aspect but rather on the management and operational benefits it brings to the business. With Cisco gaining ever-increasing market share in the server space with UCS, I don't believe we'll be seeing any disappearance of FCoE anytime soon.

    Another interesting point you've raised is about Oracle's acquisition of Xsigo. Does this now mean a complete reversal from their "SAN/networking is dead" approach with Exadata?

  4. Alan Yoder says:

    FCoE has the same fundamental problem that iSCSI does: cultural issues between the storage and networking teams in a big shop. Plus there are good data security reasons to keep the storage and application networks separate. Solution: use two networks (two wires). But that’s what we have today, and barring a significant price/performance difference, what’s the point of the change? I guess I could see it having a role over long distance based on performance alone if the pure FC guys fail to keep pace with wire speed improvements.

    IMO, SAS out the front end of the storage and PCIe out the back end are potentially more disruptive than FCoE.

    • cfheoh says:

      Hi Alan

      Thanks for your wise insights. There isn’t much motivation to switch to FCoE because the price/performance isn’t good enough to get the FCoE momentum going.

      However, do share more about what you said about PCIe in the back end. Appreciate your reply.

      Thank you
      /Chin-Fah

  5. anon says:

    The only really viable "IP over xxx" is IP over PCIe, both at board level and via external PCIe cabling (that standard has been out since Gen 1.0; why almost no tech site seems to acknowledge it is beyond me).

    The best products with an actual track record of doing ePCIe cable interconnections within systems come, as far as I know, from One Stop Systems, Inc.

    Too bad those cards/cables are not available from any of the major resellers/distributors. 🙁

    All the necessary PCIe non-transparent bridging and such is already available in the stock Linux kernel, so there isn't any need for vendor-specific blobs either… in theory; there is no way of knowing, as nobody seems to examine that.

    PS: yes, iSCSI over i-PCIe (one of the first published IP-over-PCIe stack vendors) already works, according to the i-PCIe people.

    • cfheoh says:

      Hi

      Interesting take on iSCSI over i-PCIe. This is the first time I am hearing of it. Fantastic stuff, but unfortunately there is little market interest in extending ePCIe into the enterprise storage array market.

      Do share if there are new developments in this area.

      Thanks for your insight.
      /Chin-Fah

  6. Derix says:

    Hi ChinFah, sorry, your description is a bit too deep for me. Right now, we are evaluating whether to get a Cisco Nexus switch or a normal Catalyst switch. The reason for considering the Nexus is the FCoE thingy. As you may be aware, we are running a SAN box connected to 2 SAN switches. So I would like to check with you: is it worth going for the Nexus (converged LAN and FCoE), or should we just let the FC run separately? Thanks in advance.

    • cfheoh says:

      Hello Derix

      Sorry for the late reply. I have been traveling a lot and haven't had much time to look at the comments on my blog.

      I have spoken to quite a lot of people in my travels, especially in India and Australia, and it appears that the take-up rate of FCoE isn't exactly flying off the shelves. Many are still comfortable with the typical FC SAN fabric and are not investing in FCoE gear. Part of that is the FCoE cost, and another reason is that many are still hoping for 10GBASE-T rather than a fiber-optic network.

      Again, I cannot comment on the future of your network because I do not know the business and operational objectives. If there is a need to go green and save money on power, cooling and rack space, then FCoE is the way to go. Unfortunately, it is difficult to explain these cost savings to management because you need a tool/utility/spreadsheet to calculate utility savings, PUE and other statistics in a tangible manner. A spreadsheet is probably the last resort, because the data collection is likely manual and the productivity of the IT folks drops.

      On the technical front, if you have a large network and plan to have more VSANs, then the VLAN limit of 4094 becomes a factor. Each VSAN must be mapped to a VLAN in the Cisco CEE network.

      Hope that helps. All the best

      /Chin-Fah
