From the past to the future

2019 beckons. The year 2018 is coming to a close, and I am looking back on what I have blogged over the past years to reflect on what the future holds.

The evolution of the Data Services Platform

In late 2017, I blogged about the Data Services Platform. Storage is no longer just the storage infrastructure we know; it has evolved into a platform from which a plethora of data services are served. The face of storage keeps changing as the IT industry changes. I take this opportunity to reflect on what I have written since I started blogging years ago, and to look at the articles about the technologies that are shaping the landscape today, as well as some duds.

Some good ones …

One of the most memorable ones is about the memory cloud. I wrote the article when Dell acquired a small company by the name of RNA Networks. I vividly recall what was going through my mind when I wrote that blog. With SAN, NAS and DAS, and even FAN (File Area Network), happening during that period, the first thing that came to mind was the System Area Network, the original objective of InfiniBand and RDMA. I believed the final pool where storage would reside is memory, hence I called it “The Last Bastion – Memory“. RNA’s technology became part of Dell Fluid Architecture.

True enough, the present-day technologies of Storage Class Memory and SNIA’s NVDIMM are along the lines of the memory cloud I espoused years ago.

What about Fibre Channel over Ethernet (FCoE)? It was not a compelling enough technology for me when it came into the game. Reduced port and cable counts and reduced power consumption were what the FCoE folks were pitching, but the cost of putting in the FC switches and the HBAs was just too great an investment. In the end, we could see the cracks in the FCoE story, and I wrote the premature eulogy of FCoE in my 2012 blog. I got some unsavoury comments for writing that blog back then, but fast forward to the present, and FCoE isn’t a force anymore.

A few weeks ago, Amazon Web Services (AWS) became a hybrid cloud service provider/vendor with the Outposts announcement. It didn’t surprise me, but it may have shaken the traditional systems integrators. I took that stance 2 years ago when AWS partnered with VMware, and juxtaposed it with the philosophical quote from the 1993 Jurassic Park movie – “Life will not be contained, … Life finds a way“.


Supercharging Ethernet … with a PAUSE

It’s been a while since I wrote. I had just finished a 2-week stint in Melbourne, conducting 2 Data ONTAP classes and had a blast.

But after almost 3 1/2 months of doing little except teaching NetApp classes, the stint is ending. I wanted it that way, to take a break and also to take on a new challenge. I will be taking on a job with Hitachi Data Systems, going back to the industry that I have termed the “Wild, wild west”. After a 4 1/2-year hiatus, I think that industry still behaves the way it always has .. brash, exclusive, rich! The oligarchy of the oilmen is still laughing its way to the bank. And it will be my job to sell storage (and cloud) solutions to them.

In my NetApp (and EMC) engagements over the past 6 months, I have seen greater adoption of iSCSI over Fibre Channel, and many have predicted that 10Gigabit Ethernet will be the inflection point where iSCSI can finally stand shoulder-to-shoulder with Fibre Channel. After all, 10 Gigabit/sec is definitely faster than 8 Gigabit/sec Fibre Channel, right? WRONG! (I am perfectly aware there is a 16 Gigabit/sec Fibre Channel, but can’t you see I am trying to start an argument here?)

Delivering a SCSI payload over iSCSI on 10 Gigabit/sec Ethernet does not necessarily mean that it will be faster than delivering the same payload over 8 Gigabit/sec Fibre Channel. This statement can be viewed in many different ways, and hence the favourite IT reply would be … “It depends“.
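
To see why raw line rate does not settle the argument, here is a back-of-envelope sketch. The line rates and encoding schemes (8b/10b for 8Gbit/sec FC, 64b/66b for 10GbE) are standard figures; the framing overheads are my own rough assumptions, and the sketch deliberately ignores TCP/IP stack processing, interrupt overhead and retransmission on loss, which are usually what tip the scales in real deployments.

    # Back-of-envelope comparison (rough assumptions, not a benchmark) of wire-level
    # goodput for 8Gbit/sec Fibre Channel vs iSCSI over 10Gbit/sec Ethernet.

    def fc_8g_goodput_gbps():
        line_rate = 8.5            # 8GFC serial line rate in Gbaud (approx.)
        encoding = 8.0 / 10.0      # 8b/10b encoding efficiency
        payload = 2112.0           # max FC frame payload in bytes
        overhead = 36.0            # SOF + frame header + CRC + EOF, roughly
        return line_rate * encoding * payload / (payload + overhead)

    def iscsi_10g_goodput_gbps():
        line_rate = 10.3125        # 10GbE serial line rate in Gbaud
        encoding = 64.0 / 66.0     # 64b/66b encoding efficiency
        payload = 1460.0           # TCP payload in a standard 1500-byte MTU frame
        overhead = 78.0            # Ethernet/IP/TCP headers, preamble, inter-frame gap, roughly
        return line_rate * encoding * payload / (payload + overhead)

    print(f"8Gbit/sec FC : ~{fc_8g_goodput_gbps():.1f} Gbit/sec usable")
    print(f"10GbE iSCSI  : ~{iscsi_10g_goodput_gbps():.1f} Gbit/sec usable, before any loss or retransmission")

On paper the Ethernet side looks healthier, but the moment the host CPU has to churn through the TCP/IP and iSCSI layers, or a congested switch starts dropping frames, those numbers shrink quickly – which is exactly why the honest answer remains “it depends”.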

I will leave this performance argument for another day, but today we are going to talk about some of the key additions that supercharge 10 Gigabit Ethernet for data delivery in a storage networking capacity. In addition, 10 Gigabit Ethernet is the primary transport for Fibre Channel over Ethernet (FCoE), and it is absolutely critical that 10 Gigabit Ethernet comes close to Fibre Channel in reliability for data delivery in a storage network.

Ethernet is a non-deterministic protocol, and therefore its delivery result depends on many factors. 10 Gigabit Ethernet has inherited that trait. The delivery of data over Ethernet can be lossy, i.e. packets can get lost, and the upper-layer protocols have to detect the dropped packets and ensure the lost packets are redelivered to complete the consignment. But delivering data in a storage network cannot be lossy, and in most SANs the requirement is for data to arrive in the sequence in which it was sent. The SAN fabric (especially with the common services of Layer 3 of the FC protocol stack) and the deterministic nature of the Fibre Channel protocol are the reasons many have relied on Fibre Channel SAN technology for more than a decade. How can 10 Gigabit Ethernet respond?
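
Just to illustrate what “the upper layer has to clean up” looks like, here is a toy sketch of loss detection and retransmission – a stand-in for what TCP does underneath iSCSI, not real networking code. The loss rate and block count are made-up numbers.

    import random

    def lossy_link(blocks, loss_rate=0.2):
        """Deliver blocks over a simulated lossy Ethernet link, dropping some at random."""
        return [b for b in blocks if random.random() > loss_rate]

    def deliver_reliably(block_ids):
        """Keep resending whatever has not arrived until everything gets through."""
        outstanding = set(block_ids)
        rounds = 0
        while outstanding:
            rounds += 1
            arrived = lossy_link(sorted(outstanding))
            outstanding -= set(arrived)   # gaps left behind get retransmitted next round
        return rounds

    print(f"all 100 blocks delivered after {deliver_reliably(range(100))} round(s) of (re)transmission")

Every one of those extra rounds is latency and CPU time that a lossless, in-order Fibre Channel fabric simply never spends.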


The beginning of the end of FCoE

Never bet against Ethernet!

I am sure many IT experts and practitioners would agree. In the past 30 years or so, Ethernet has fought and won against many so-called “Ethernet killers”. The one that stood out for me was ATM (Asynchronous Transfer Mode), because in a past job I implemented NFS over ATM, running in LANE (LAN Emulation) mode, in a NetApp filer setup at Sarawak Shell.

That was more than 10 years ago. And 10 years ago, ATM was a hot technology. It was touted as the next-generation network technology, supposed to unify voice, data and networking. ATM also had better framing and QoS (Quality-of-Service) control, and offered several modes of traffic shaping and policies. Yet today, ATM has been reduced to a niche telecommunication protocol and does not participate much in the LAN technology space.

That was the networking space. The storage networking space has been dominated by Fibre Channel for almost 15 years. Fibre Channel is a serial technology that replaced the channel-based technology of SCSI in the enterprise. And Fibre Channel has grown by leaps and bounds, dominating the SAN (Storage Area Network) landscape with speeds up to 16Gbit/sec today.

When the networking world and the storage networking world collided (I mean, combined) with Fibre Channel over Ethernet (FCoE) technology some years back, something had to give sooner or later. Yup, FCoE was really hot 2 years ago, but where is it today? Is Cisco still singing about FCoE like it used to? What about the other storage vendors that used to have at least 1 FCoE slide in their product presentations?

Welcome to the world of IT hype! FCoE’s benefits? The ability to carry LAN and SAN traffic over one piece of wire. 10 Gigabit-style, baby!


10Gigabit Ethernet will rule

As far as what next-generation storage networks will look like, 10Gigabit Ethernet (10GbE) is definitely the strongest candidate for the storage network. This is made possible by key enhancements to Ethernet that bring greater reliability and performance. These enhancements go by several names, such as Data Center Ethernet (a term coined by Cisco) and Converged Enhanced Ethernet (CEE), but probably the more widely used term is DCB, or Data Center Bridging.

Ethernet, so far, has never failed to deliver, and as far as I am concerned, Ethernet will rule for the next 10 years or more. Ethernet has evolved over several generations, from Ethernet running at 10Mbits/sec to Fast Ethernet, then Gigabit Ethernet and now 10Gigabit Ethernet. Pretty soon, it will be looking at 40Gbits/sec and 100Gbits/sec. It is a tremendous protocol, able to evolve and adapt to modern data networks.

Before 10GbE, the delivery of packets was on a best-effort basis. But today’s networks demand scalability, security, performance and, most of all, reliability. Since the advent of DCB, 10GbE has been fortified with these key technologies:

  • 10GBASE-T – using Cat 6/6A cabling standards, 10GBASE-T delivers low cost, simple UTP (unshielded twisted pair) networking to the masses
  • iWARP – Support for iWARP is crucial for RDMA (Remote Direct Memory Access). RDMA, in a nutshell, reduces the overhead of the typical buffer-to-buffer copies in the networking stack by bypassing those bottlenecks and placing the data blocks directly into the memory of the corresponding requesting node.
  • Low-latency cut-through switching at Layer 2, achieved by reading just the header of the packet instead of its full length. The information contained in the header is sufficient to make a switching/forwarding decision (a back-of-envelope latency sketch follows this list)
  • Energy efficiency, by introducing a low-power idle state and other implementations that make power consumption more proportional to the network utilization rate
  • Congestion notification and priority-based pause frames, which handle 8 different classes of traffic to ensure lossless network delivery
  • Shortest-path, adaptive routing for Ethernet forwarding. TRILL (Transparent Interconnection of Lots of Links) is one such implementation. Lately OpenFlow has been jumping on the bandwagon as a viable option, but I need to check out OpenFlow support with 10GbE and DCB.
  • FCoE (Fibre Channel over Ethernet) is all the rage these days, and 10GbE has the ability to carry Fibre Channel traffic. This has sparked an initial frenzy among storage vendors.
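
Here is the latency sketch mentioned in the cut-through switching point above. The 64-byte header read and the 1518-byte full frame are my own illustrative assumptions, not figures from any particular switch; the arithmetic simply shows how much serialization delay a store-and-forward switch must absorb before it can even begin forwarding.

    LINE_RATE_BPS = 10e9          # 10 Gigabit Ethernet

    def serialization_delay_us(nbytes, rate_bps=LINE_RATE_BPS):
        """Time to clock nbytes onto the wire at the given line rate, in microseconds."""
        return nbytes * 8 / rate_bps * 1e6

    header_bytes = 64             # assumed portion a cut-through switch reads before forwarding
    frame_bytes = 1518            # a full-size Ethernet frame

    print(f"cut-through      : ~{serialization_delay_us(header_bytes):.2f} us before forwarding starts")
    print(f"store-and-forward: ~{serialization_delay_us(frame_bytes):.2f} us to buffer the whole frame first")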

Of course, last but not least, we are already seeing the sunset of Fibre Channel. While 8Gbps FC has been out for a while, its adoption rate seems to have stalled. Many vendors and customers are still in the 4Gbps range, playing a wait-and-see game. 16Gbps FC has been talked about, but it seems that all the fireworks are with 10Gigabit Ethernet right now. It will rule …