The world has pretty much settled on hybrid cloud as the way to go for IT infrastructure services today. Straddling the enterprise data center and infrastructure-as-a-service offerings in the public cloud, hybrid clouds define the storage ecosystem and architecture of choice.
A recent Blocks & Files article, “Broadcom server-storage connectivity sales down but recovery coming”, caught my attention. One segment mentioned that server-storage connectivity sales were down 9%, leading me to think: is this a blip, or is it a signal that Fibre Channel, the venerable SAN (storage area network) protocol, is on the wane?
Thus, I am pondering the position of Fibre Channel SANs in the cloud era. Where do they stand now and in the near future?
Respect for Fibre Channel
I have done quite a number of large FC SAN implementations between 1993 and 2013. These included probably the first FC SAN (based on FC-AL mode), delivered through Sun Microsystems Malaysia for Hock Hua Bank in Sibu, Sarawak in 1993, and probably the largest storage project in the world at the time, the Shell IT International GUSto (Global Unified Storage) project with Hitachi Data Systems and NetApp®, at their mega data centers in Houston, Amsterdam and Cyberjaya in 2006. I was a SNIA®-certified FC-SAN Professional in 2002, and a SNIA® course instructor for a while. So I am quite comfortable with FC, even the deep dive into the 5 layers of the FC protocol stack and the anatomy of the FC frame with its 2112-byte data payload. Heck, I even memorized the 8b/10b encoding table at one point.
So I have a lot of respect for Fibre Channel.
Fibre Channel plusses
Fibre Channel is at Generation 7 now, running at 64Gbps, with Gen 8 at 128Gbps expected in 2-3 years. If I were to count the wins of Fibre Channel:
- Reliable – FC is serial. Unlike parallel data interfaces and specifications of the past, such as SCSI-1/2, parallel ATA and HIPPI, which suffered from jitter and skew, FC can provide deterministic, lossless delivery with a high degree of reliability. Unique specifications such as the 8b/10b, 64b/66b and 128b/130b line encodings make the protocol and the delivery of data over FC incredibly reliable.
- Low latency, high throughput – The protocol is highly efficient relative to the TCP/IP communications framework, enabling it to scale into a high-performance, high-throughput storage network for the SAN.
- Services integrated – Unlike IP-based storage networks, many services such as naming services, auto-discovery, on-the-fly resource adjustment, security and encryption are already designed and built into the FC protocol.
- Secure – There are many security features built into the FC framing services, including end-to-end encryption, login and authentication services (PLOGI and FLOGI are well known), zoning (soft/WWN and hard/port), LUN masking, key management and more.
- Strong roadmap and future – Under the stewardship of the FCIA (Fibre Channel Industry Association), there is a clearly defined roadmap for Fibre Channel: not just speed increases to 128, 256 and 512Gbps and possibly 1Tbps (yes, terabits per second), but also corresponding increases in single-channel and quad-channel gigabaud rates, plus modern features to take FC to the next generation and beyond. NVMe-over-Fibre Channel (NVMe-o-FC), implemented through the FC-NVMe-2 specification, exemplifies the well-defined route for FC to adopt the burgeoning NVMe storage protocol, which has been leapfrogging SCSI-3 devices and endpoints across the storage industry.
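To put the encoding point above in numbers, here is a small illustrative Python sketch of the payload efficiency of each line-encoding scheme; the generation annotations in the comments are my own broad-brush mapping, not from the roadmap itself:

```python
# Each line-encoding scheme maps N payload bits into M transmitted bits,
# so efficiency = N / M and the rest is encoding overhead on the wire.
ENCODINGS = {
    "8b/10b": (8, 10),      # early FC generations
    "64b/66b": (64, 66),    # later FC generations
    "128b/130b": (128, 130),
}

def efficiency(payload_bits: int, line_bits: int) -> float:
    """Fraction of transmitted bits that carry actual payload."""
    return payload_bits / line_bits

for name, (payload, line) in ENCODINGS.items():
    eff = efficiency(payload, line)
    print(f"{name}: {eff:.1%} efficient ({1 - eff:.1%} overhead)")
```

The jump from 8b/10b's 20% overhead to roughly 3% and 1.5% for the later schemes is part of why newer FC generations deliver so much more usable throughput per raw gigabaud.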
Fibre Channel minuses
I also had to look at the present pandemic-era landscape and assess its impact on Fibre Channel's future.
- Costs – FC is a premium solution. It is not for small and medium-sized organizations unless they have a strong reason to adopt FC. Implementing HBAs, fabric switches, OM cables and replication over long distances will hit the company's purse strings, not to mention future upgrades.
- Hyperscaling – In today's hyperscale, scale-out, cloud-like data centers and hyperconverged infrastructure (HCI) platforms, which demand agility and a DevOps type of ecosystem, many would find FC complex and difficult to scale at the pace demanded by the new generation of infrastructure.
- New workloads – The nature of data has certainly changed. New workloads such as unstructured data and metadata for DL/ML (deep learning/machine learning), new types of commercial HPC (high performance computing), edge computing, streaming data, object storage and other uber-dynamic workloads would be challenging for FC SAN architectures.
- Kubernetes – A subset of the new workloads, Kubernetes-driven applications and platforms, whilst trying to gel persistent storage services into their initially transient, serverless designs, will sidestep FC as the target of their PVCs (persistent volume claims), preferring iSCSI and NFS as the network storage protocols of choice. API-driven infrastructure-as-code now dictates the new generation of storage provisioning services, and FC is unlikely to have a place in the new development platforms.
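To illustrate the Kubernetes point: an application never asks for a Fibre Channel LUN by name; it files a PVC against a storage class, and whatever driver sits behind that class decides the transport. A minimal sketch of such a claim (the names `app-data` and `fast-block` are hypothetical):

```yaml
# A PersistentVolumeClaim requests storage by class and size only.
# Whether "fast-block" is ultimately served over iSCSI, NFS, or even FC
# is hidden behind the StorageClass and its provisioning driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-block   # hypothetical class name
  resources:
    requests:
      storage: 20Gi
```

Because the transport is abstracted away like this, FC's protocol-level strengths carry little weight in how these platforms choose their storage.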
Niched future?
I have tried to take a balanced view of how the changing landscape has affected, and will affect, Fibre Channel's journey. It is really hard to draw a brighter future for Fibre Channel when a “Cloud First” mentality is leading digital transformation initiatives. Yet the demand for Fibre Channel, despite Broadcom/Brocade's last-quarter blip, remains as strong as ever, with the number of FC ports sold in both the switch and adapter segments growing steadily year-over-year.
But the younger generation is beginning to lose the knowledge and experience of Fibre Channel SANs, as exposure to the once-dominant enterprise-grade storage networking protocol diminishes. Pioneering FC companies like Vixel, Ancor Communications, Jaycor Networks, Emulex, Nishan Systems, McData, Maxstrat and Crossroads Systems hardly invite a blink anymore; they only tell a tale of the past, unknown or forgotten.
There is a fear in me. Its premium value has pushed FC into a niche technology space now. As we move further and farther into the clouds, I am afraid that the future of FC is cloudy … with only a chance of the technology pendulum swinging back to return FC to its glory days.
Why would you even use FC these days? Why bother with training people and buying specialized equipment for DCs when you can just have Ethernet, which everyone uses and knows? Realistically, what are the benefits, except for a few packets dropped (and recovered) here and there? I'm just studying for a DC exam and I'm genuinely curious… why? It feels like it only overcomplicates the network.
When FC appeared in the 90s, Ethernet was at a ridiculous speed of 10Mbit/sec. FC was about 1Gbit/sec at that time and moving to 2. FC-SAN (fabric, not FC-AL) was just ramping up, and FC, given its superior edge at the time, took off. When iSCSI was ratified in 2003, it had performance challenges on Ethernet as well. That was when TOE cards and, later on, iSCSI HBAs started to show up. By the time 10Gbit/sec Ethernet came, iSCSI still had challenges because FC still held the technology edge. After 10GbE started to include features like DCBX, PFC and a few more that looked like FC's technology, Ethernet started to become the dominant transport of choice for storage. Infrastructure-as-code has also made software-defined storage operations more meaningful and automated, and thus the future of FC today is challenged.
But there are still a lot of legacy implementations of FC. FC-NVMe, along with the other NVMe-oF transports, is jostling to become king, but it is pretty obvious who will triumph.