The rise of RDMA

I have known of RDMA (Remote Direct Memory Access) for quite some time, but never in depth. Since my contract work ended last week and I have some time off for personal development, I decided to look deeper into RDMA. Why RDMA?

In the past year or so, RDMA has been appearing on my radar very frequently, and rightly so. The speedy development and adoption of NVMe (Non-Volatile Memory Express) have pushed All Flash Arrays to the next level. This shifts the I/O and throughput performance bottlenecks away from the NVMe storage medium and into the legacy world of SCSI.

Most storage interfaces and protocols today, such as SAS, SATA, iSCSI and Fibre Channel, still carry SCSI payloads and have to translate between NVMe and SCSI. NVMe-to-SCSI bridges must be present to facilitate that translation.
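To make that translation concrete, below is a minimal sketch, in C, of the kind of mapping an NVMe-to-SCSI bridge performs for a SCSI READ(10) command. The struct and function names are my own simplification for illustration, not any vendor's bridge code; a real NVMe submission queue entry is a 64-byte structure with many more fields.

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified view of an NVMe Read command (opcode 0x02). Only the
 * fields that matter for this translation are modelled here. */
struct nvme_read_cmd {
    uint8_t  opcode;   /* 0x02 = Read */
    uint32_t nsid;     /* namespace ID, mapped from the SCSI LUN */
    uint64_t slba;     /* starting LBA (CDW10/CDW11) */
    uint16_t nlb;      /* number of logical blocks, 0-based (CDW12) */
};

/* Translate a SCSI READ(10) CDB into an NVMe Read command.
 * READ(10): byte 0 = 0x28, bytes 2-5 = LBA (big-endian),
 * bytes 7-8 = transfer length in blocks (big-endian). */
static int scsi_read10_to_nvme(const uint8_t cdb[10], uint32_t nsid,
                               struct nvme_read_cmd *cmd)
{
    if (cdb[0] != 0x28)
        return -1;                /* not a READ(10) */

    uint32_t lba = ((uint32_t)cdb[2] << 24) | ((uint32_t)cdb[3] << 16) |
                   ((uint32_t)cdb[4] << 8)  |  (uint32_t)cdb[5];
    uint16_t len = (uint16_t)((cdb[7] << 8) | cdb[8]);

    if (len == 0)
        return -1;                /* READ(10) length 0 = no data */

    cmd->opcode = 0x02;
    cmd->nsid   = nsid;
    cmd->slba   = lba;
    cmd->nlb    = (uint16_t)(len - 1);   /* NVMe counts from zero */
    return 0;
}

int main(void)
{
    /* READ(10): 8 blocks starting at LBA 0x1000 */
    const uint8_t cdb[10] = { 0x28, 0, 0x00, 0x00, 0x10, 0x00,
                              0, 0x00, 0x08, 0 };
    struct nvme_read_cmd cmd;

    if (scsi_read10_to_nvme(cdb, 1, &cmd) == 0)
        printf("NVMe Read: nsid=%u slba=0x%llx nlb=%u (%u blocks)\n",
               cmd.nsid, (unsigned long long)cmd.slba,
               (unsigned)cmd.nlb, (unsigned)cmd.nlb + 1);
    return 0;
}
```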

In the slide below, shared at the Flash Memory Summit, numerous red boxes lay out the SCSI connections and interfaces where SCSI-to-NVMe translation (and vice versa) would be required.
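For a feel of what RDMA itself looks like to an application, here is a minimal setup sketch using the Linux libibverbs API, assuming an RDMA-capable NIC is present. The application registers a buffer with the NIC once; a remote peer holding the returned key and address can then read or write that memory directly, bypassing the host CPU and kernel on the data path. Queue pair creation and the connection handshake are omitted, and error handling is trimmed.

```c
/* Minimal RDMA setup with libibverbs: open the first RDMA device
 * and register a buffer that the NIC may access directly.
 * Compile with: gcc rdma_sketch.c -o rdma_sketch -libverbs */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);   /* protection domain */

    /* Register 1 MiB so the NIC can DMA in and out of it, and so a
     * remote peer can RDMA-read/write it without our CPU involved. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* A peer needs these two values to target this buffer. */
    printf("rkey=0x%x addr=%p\n", mr->rkey, buf);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```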


Let’s smoke the storage peace pipe

NVMe (Non-Volatile Memory Express) is upon us. And in the next 2-3 years, we will see a slew of new storage solutions and technologies based on NVMe.

Just a few days ago, The Register released an article, “Seventeen hopefuls fight for the NVMe Fabric array crown”, and it was timely. I, for one, could not be more excited about the development and advancement of NVMe and the upcoming NVMeF (NVMe over Fabrics).

This is it. This is the one that will end the wars of DAS, NAS and SAN and unite the warring factions between server-based SAN (the sexy name differentiating old DAS and new DAS) and the networked storage of SAN and NAS. There will be PEACE.

Remember this?

Nutanix popularized the “No SAN” movement, which later led to VMware VSAN and other server-based SAN solutions, as well as hyperconverged technologies such as PernixData (acquired by Nutanix), DataCore and EMC ScaleIO, approaches also at work in hyperscalers – the likes of Facebook and Google. The hyperconverged solutions and server-based SAN have blurred the lines of storage, but still, they are not the usual networked storage architectures of SAN and NAS. I blogged about this, mentioning how the pendulum has swung back to favour DAS or, to put it more appropriately, server-based SAN. There was always a “Great Divide” between the 2 modes of storage architectures.

Boosting Solid States beyond SATA

Lately, I have been getting deeper and deeper into low-level implementations of storage technologies. In my previous blog, I wrote about my learning adventures with Priority Flow Control (PFC), and I intend to take the Data Center Bridging concepts further in future blog entries.

Before I left for a holiday in Sydney last week, I got sidetracked by exciting stuff happening in my daily encounters with friends, old and new. 2 significant storage-related technologies fell into my lap. One is NVMe (Non-Volatile Memory Express) and the other is FPGA (Field Programmable Gate Array).

While this blog is going to be about NVMe, I actually found FPGA much more exciting. Through conversations, I found that there are 2 “biggies” in the FPGA world, designed and manufactured by Xilinx and Altera. I admit that I have not done my homework on FPGA yet, having just returned from Sydney last night. I will write about FPGA in future blogs.

But NVMe is an important technology direction for the storage world as well.

I think most of us are probably already mesmerized by solid state drives. The bombardment of marketing, presentations, advertising and whatever else the vendors do to promote (and self-promote) solid state drives is inundating the intellectual senses of consumers and enterprises alike. And yet, many vendors do not explain both the pros and cons of integrating solid state storage into an IT environment. Even worse, many do not even know the strengths and weaknesses of solid state technology, spreading exaggerations that feed a spiral of inaccuracies. Like a self-feeding frenzy, the industry seems to have anointed solid state storage as the saviour of the enterprise storage world. Go figure!


Supercharging Ethernet … with a PAUSE

It’s been a while since I last wrote. I had just finished a 2-week stint in Melbourne, conducting 2 Data ONTAP classes, and had a blast.

But after almost 3 1/2 months of doing little except teaching NetApp classes, the stint is ending. I wanted it that way, to take a break and also to take on a new challenge. I will be taking a job with Hitachi Data Systems, going back to the industry that I have termed the “Wild, Wild West”. After a 4 1/2-year hiatus, I think that industry still behaves the way it did … brash, exclusive, rich! The oligarchy of the oilmen is still laughing its way to the bank. And it will be my job to sell storage (and cloud) solutions to them.

In my NetApp (and EMC) engagements over the past 6 months, I have seen greater adoption of iSCSI over Fibre Channel, and many have predicted that 10 Gigabit Ethernet will be the inflection point where iSCSI can finally stand shoulder-to-shoulder with Fibre Channel. After all, 10 Gigabit/sec is definitely faster than 8 Gigabit/sec Fibre Channel, right? WRONG! (I am perfectly aware there is 16 Gigabit/sec Fibre Channel, but can’t you see I am trying to start an argument here?)

Delivering a SCSI payload over iSCSI on 10 Gigabit/sec Ethernet does not necessarily mean it will be faster than delivering the same payload over 8 Gigabit/sec Fibre Channel. This statement can be viewed in many different ways, and hence the favourite IT reply would be … “It depends”.
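As a back-of-the-envelope illustration, here is the theoretical line-rate arithmetic, sketched in C so the numbers are explicit. The encodings (8b/10b for 8 Gigabit Fibre Channel, 64b/66b for 10 Gigabit Ethernet) are standard; the per-frame overheads are my simplifying assumptions. Note that the raw arithmetic actually favours 10 Gigabit Ethernet; the “it depends” comes from everything the arithmetic ignores, such as TCP retransmissions on packet loss, congestion behaviour and the CPU cycles burned on TCP/IP processing.

```c
/* Theoretical best-case payload rates for 8G FC vs 10GbE iSCSI.
 * No packet loss, no TCP retransmits, no CPU limits assumed,
 * which is precisely the point of "it depends". */
#include <stdio.h>

int main(void)
{
    /* 8G Fibre Channel: 8.5 GBaud line rate, 8b/10b encoding. */
    double fc_bps  = 8.5e9 * 8.0 / 10.0;           /* 6.8 Gbps of data  */
    /* Max FC frame: 2112-byte payload + 36 bytes SOF/header/CRC/EOF. */
    double fc_eff  = 2112.0 / (2112.0 + 36.0);
    double fc_MBps = fc_bps * fc_eff / 8.0 / 1e6;

    /* 10GbE: 10.3125 GBaud line rate, 64b/66b encoding. */
    double eth_bps = 10.3125e9 * 64.0 / 66.0;      /* 10.0 Gbps of data */
    /* 1500-byte MTU minus 20 IP + 20 TCP header bytes, carried in a
     * frame with 38 bytes of preamble, MAC header, FCS and gap. */
    double eth_eff  = (1500.0 - 40.0) / (1500.0 + 38.0);
    double eth_MBps = eth_bps * eth_eff / 8.0 / 1e6;

    printf("8G FC payload: ~%.0f MB/s\n", fc_MBps);   /* ~836 MB/s  */
    printf("10GbE payload: ~%.0f MB/s\n", eth_MBps);  /* ~1187 MB/s */
    return 0;
}
```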

I will leave the performance argument for another day; today we are going to talk about some of the key additions that supercharge 10 Gigabit Ethernet for data delivery in a storage networking capacity. In addition, 10 Gigabit Ethernet is the primary transport for Fibre Channel over Ethernet (FCoE), so it is absolutely critical that 10 Gigabit Ethernet be close to as reliable as Fibre Channel for data delivery in a storage network.

Ethernet is a non-deterministic protocol, and therefore its delivery results depend on many factors; 10 Gigabit Ethernet has inherited that trait. The delivery of data over Ethernet can be lossy, i.e. packets can get lost, and the upper-layer protocols have to detect the dropped packets and ensure the lost packets are redelivered to complete the consignment. But data delivery in a storage network cannot be lossy, and in most SANs the requirement is for data to arrive in the sequence in which it was sent. The SAN fabric (especially with the common services at Layer 3 of the FC protocol stack) and the deterministic nature of the Fibre Channel protocol are the reasons many have relied on Fibre Channel SAN technology for more than a decade. How can 10 Gigabit Ethernet respond?
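The “PAUSE” in the title is the first part of the answer: the IEEE 802.3x MAC control frame, which lets a congested receiver tell its link partner to stop transmitting for a specified time. Here is a sketch of its on-wire layout in C, as I read the standard; Priority Flow Control (IEEE 802.1Qbb) reuses the same EtherType, with opcode 0x0101, a class-enable vector and a timer per priority instead of the single timer shown here.

```c
/* IEEE 802.3x PAUSE frame, on-wire layout (big-endian fields).
 * A congested receiver sends this to ask its link partner to stop
 * transmitting for pause_time quanta; one quantum is 512 bit times
 * (51.2 ns at 10 Gbps). pause_time = 0 means "resume immediately". */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* htons */

#pragma pack(push, 1)
struct pause_frame {
    uint8_t  dst[6];      /* 01:80:C2:00:00:01, reserved multicast  */
    uint8_t  src[6];      /* sender's MAC address                   */
    uint16_t ethertype;   /* 0x8808 = MAC Control                   */
    uint16_t opcode;      /* 0x0001 = PAUSE (PFC uses 0x0101)       */
    uint16_t pause_time;  /* quanta to pause, 0..65535              */
    uint8_t  pad[42];     /* zero padding up to the 60-byte minimum */
};
#pragma pack(pop)

int main(void)
{
    struct pause_frame f;
    memset(&f, 0, sizeof f);

    static const uint8_t ctrl_mc[6] = {0x01, 0x80, 0xC2, 0x00, 0x00, 0x01};
    memcpy(f.dst, ctrl_mc, sizeof ctrl_mc);
    f.ethertype  = htons(0x8808);
    f.opcode     = htons(0x0001);
    f.pause_time = htons(0xFFFF);   /* ask for the maximum pause */

    printf("PAUSE frame: %zu bytes on the wire (before FCS), "
           "pausing ~%.0f us at 10 Gbps\n",
           sizeof f, 65535 * 512.0 / 10e9 * 1e6);
    return 0;
}
```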
