SMB Witness Protection Program

No, no, the FBI is not in the storage business, and there are no witnesses to protect.

However, SMB 3.0 has introduced an RPC-based mechanism to inform clients of any state change in the SMB servers. Microsoft calls it the Service Witness Protocol [SWP], and its objective is to provide a much faster notification service that allows SMB 3.0 clients to fail over. In SMB 1.0 and even in SMB 2.x, the SMB clients relied on time-outs. Those time-outs, whether SMB or TCP, could take as much as 30-45 seconds, creating a latency that is disruptive to enterprise applications.

SMB 3.0, as mentioned in my previous post, had a total revamp and is now enterprise ready. In what Microsoft calls “Continuously Available” File Service, SMB 3.0 supports clustered or scale-out file servers. The SMB shares must be published as “Continuously Available” shares and mapped to SMB 3.0 clients, as shown in the diagram below (provided by SNIA’s webinar):

SMB 3.0 CA Shares

Client A maps to a share on Server 1 (\\srv1\CAshr). Client A holds a share “handle” that establishes a connection with a corresponding session state. That session state is kept synchronously consistent with a corresponding state on Server 2.
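
To make the idea concrete, here is a rough conceptual sketch in Python of what a persistent handle plus a witness notification buys the client. Every class and method name below is hypothetical; this is not the actual MS-SWN RPC interface, just the push-notification-versus-time-out idea:

```python
# Conceptual sketch only -- not the actual MS-SWN RPC interface.
# All class and method names here are hypothetical, for illustration.

TIMEOUT_FAILOVER_SECS = 40      # typical SMB/TCP time-out window (30-45 s)
WITNESS_NOTIFY_SECS = 1         # the witness pushes the change almost immediately

class CAShareClient:
    """An SMB 3.0 client holding a persistent handle to a CA share."""

    def __init__(self, share, nodes):
        self.share = share                    # e.g. r"\\srv1\CAshr"
        self.nodes = nodes                    # ["srv1", "srv2"]
        self.active = nodes[0]
        self.handle = ("handle-123", "session-state")   # survives failover

    def on_witness_notification(self, failed_node):
        # The witness tells us srv1 is gone; instead of waiting out a
        # 30-45 second time-out, we reconnect to the partner node, which
        # already holds a synchronised copy of our session state.
        self.active = [n for n in self.nodes if n != failed_node][0]
        print(f"Failover to {self.active}, reusing {self.handle[0]}")

client = CAShareClient(r"\\srv1\CAshr", ["srv1", "srv2"])
client.on_witness_notification("srv1")   # ~1 s instead of ~40 s
```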

The Service Witness Protocol is not responsible for synchronising the states within the SMB file server cluster. Microsoft has left the HA/cluster/scale-out capability to the proprietary methods of the NAS vendors. However, SWP constantly observes the status of all services under its watch.

Continue reading

SMB on steroids but CIFS lord isn’t pleased

I admit it!

I am one of the guilty parties who continue to use CIFS (Common Internet File System) to refer to the Windows file sharing protocol. And a lot of vendors continue to use the “CIFS” word loosely without knowing that it is something from a bygone era. One of my friends even pronounced it as “See Fist“, which sounded even funnier when he said it. (This is for you, Adrian M!)

And we couldn’t be more wrong, because we shouldn’t be using the CIFS word anymore. It is so ’90s, man! The tell-tale signs have been there all along, but most of us chose to ignore them with gusto. A recent SNIA webinar titled “SMB 3.0 – New opportunities for Windows Environment” aims to dispel our incompetence and change our CIFS-venture to the correct word – SMB (Server Message Block).

This selfie photo of Dennis Chapman, Senior Technical Director for Microsoft Solutions at NetApp (from the SNIA webinar slides above), wants to inform all of us that …

SMB History

Continue reading

Boosting Solid States beyond SATA

Lately, I have been getting deeper and deeper into low-level implementations of storage technologies. In my previous blog post, I wrote about my learning adventure with Priority Flow Control (PFC), and I intend to explore the Data Center Bridging concepts further in future blog entries.

Before I left for Sydney for a holiday last week, I got sidetracked by some exciting stuff happening in my daily encounters with friends and new friends. Two significant storage-related technologies fell into my lap. One is NVMe (Non-Volatile Memory Express) and the other is FPGA (Field Programmable Gate Array).

While this blog post is going to be about NVMe, I actually found FPGA much more exciting. Through conversations, I found that there are 2 “biggies” in the FPGA world, and they are designed and manufactured by Xilinx and Altera. I admit that I have not done my homework on FPGA yet, having just returned from Sydney last night. I will blog about FPGA in future posts.

But NVMe is also an important technology direction for the storage world.

I think most of us are probably already mesmerized by solid state drives. The bombardment of marketing, presentations, advertising and whatever else the vendors do to promote (and self-promote) solid state drives is inundating the intellectual senses of consumers and enterprises alike. And yet, many vendors do not explain both the pros and the cons of integrating solid state into an IT environment. Even worse, many don’t even know the strengths and weaknesses of solid state, and so they create exaggerations that feed a spiral vortex of inaccuracies. Like a self-feeding frenzy, the industry seems to have placed solid state storage as the saviour of the enterprise storage world. Go figure!

Continue reading

Supercharging Ethernet … with a PAUSE

It’s been a while since I wrote. I had just finished a 2-week stint in Melbourne, conducting 2 Data ONTAP classes and had a blast.

But after almost 3 1/2 months of doing little except teaching NetApp classes, the stint is ending. I wanted it that way, to take a break and also to take on a new challenge. I will be taking a job with Hitachi Data Systems, going back to the industry that I have termed the “Wild, Wild West”. After a 4 1/2-year hiatus, I think that industry still behaves the way it did … brash, exclusive, rich! The oligarchy of oilmen is still laughing its way to the bank. And it will be my job to sell storage (and cloud) solutions to them.

In my NetApp (and EMC) engagements over the past 6 months, I have seen greater adoption of iSCSI over Fibre Channel, and many have predicted that 10 Gigabit Ethernet will be the inflection point where iSCSI can finally stand shoulder-to-shoulder with Fibre Channel. After all, 10 Gigabit/sec is definitely faster than 8 Gigabit/sec Fibre Channel, right? WRONG! (I am perfectly aware there is 16 Gigabit/sec Fibre Channel, but can’t you see I am trying to start an argument here?)

Delivering a SCSI data load over iSCSI on 10 Gigabit/sec Ethernet does not necessarily mean it will be faster than delivering the same payload over 8 Gigabit/sec Fibre Channel. This statement can be viewed in many different ways, and hence the favourite IT reply would be … “It depends“.
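
Here is a rough back-of-the-envelope sketch of why the headline numbers alone don’t settle the argument. The encoding and overhead figures are the usual published ones, not measurements from any particular setup:

```python
# Rough back-of-the-envelope comparison; the overhead figures are typical
# published values, not measurements from any specific environment.

# 8GFC: 8.5 GBaud line rate with 8b/10b encoding, so 80% of the baud rate
# carries FC frames.
fc_line_gbaud = 8.5
fc_data_gbps = fc_line_gbaud * 8 / 10            # ~6.8 Gbps of FC frames

# 10GbE: 64b/66b encoding already yields ~10 Gbps of Ethernet frames, but
# iSCSI then pays Ethernet + IP + TCP overhead on every frame.
eth_payload = 1500                               # standard MTU
eth_overhead = 38                                # header + FCS + preamble + IFG
tcpip_headers = 40                               # IPv4 (20) + TCP (20)
efficiency = (eth_payload - tcpip_headers) / (eth_payload + eth_overhead)
iscsi_data_gbps = 10.0 * efficiency              # ~9.5 Gbps of SCSI payload

print(f"8GFC usable bit rate   : ~{fc_data_gbps:.1f} Gbps")
print(f"10GbE iSCSI, 1500 MTU  : ~{iscsi_data_gbps:.1f} Gbps")
# Raw arithmetic slightly favours 10GbE, but it says nothing about the CPU
# cost of TCP/iSCSI processing, packet loss and retransmission, or latency --
# which is exactly why the honest answer is "it depends".
```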

I will leave the performance argument for another day; today we are going to talk about some of the key additions that supercharge 10 Gigabit Ethernet for data delivery in a storage networking capacity. In addition, 10 Gigabit Ethernet is the primary transport for Fibre Channel over Ethernet (FCoE), and it is absolutely critical that 10 Gigabit Ethernet be close to as reliable as Fibre Channel for data delivery in a storage network.

Ethernet is a non-deterministic protocol, and therefore its delivery is dependent on many factors. 10 Gigabit Ethernet has inherited part of that trait. The delivery of data over Ethernet can be lossy, i.e. packets can get lost, and the upper-layer protocols have to detect the dropped packets and ensure that lost packets are redelivered to complete the consignment. But delivering data in a storage network cannot be lossy, and in most SANs the requirement is for the data to arrive in the sequence in which it was sent. The SAN fabric (especially the common services at Layer 3 of the FC protocol stack) and the deterministic nature of the Fibre Channel protocol are the reasons many have relied on Fibre Channel SAN technology for more than a decade. How can 10 Gigabit Ethernet respond?
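
Part of the answer is the PAUSE in the title of this post. The 802.3x PAUSE frame (and its per-priority cousin in Priority Flow Control) tells a sender to stop transmitting before the receiver’s buffers overflow, instead of silently dropping packets. Here is a quick illustrative calculation of the timing involved, using the standard pause quantum definition:

```python
# Illustrative arithmetic for 802.3x PAUSE / 802.1Qbb PFC timing.
# A pause quantum is defined as 512 bit times; the pause_time field in the
# frame is a 16-bit count of quanta.

link_gbps = 10
bit_time_ns = 1 / link_gbps                  # 0.1 ns per bit at 10 Gbps
quantum_ns = 512 * bit_time_ns               # 51.2 ns per pause quantum
max_quanta = 0xFFFF                          # largest possible pause_time

max_pause_ms = max_quanta * quantum_ns / 1e6
print(f"One pause quantum at 10GbE : {quantum_ns:.1f} ns")
print(f"Longest single PAUSE       : ~{max_pause_ms:.2f} ms")
# PFC issues this per priority class (8 classes), so one congested traffic
# class can be paused while the other classes keep flowing.
```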

Continue reading

AoE – All about Ethernet!

This is long overdue.

A reader of my blog asked if I could do a piece on Coraid. Coraid who?

Coraid is probably a name not many people in Malaysia have heard of. Even most of the storage guys I talk to have never heard of it.

I have known about Coraid for a few years now (thanks to my incessant reading habit), looking at it from a nonchalant point of view. But when the reader asked about Coraid, I contacted Kevin Brown, CEO of Coraid, whom I had somehow connected with through LinkedIn. Kevin was very responsive and got one of their directors to contact me. Kaushik Shirhatti was his name, and he was very passionate about sharing the Coraid technology with me. Thanks, Kevin and Kaushik!

That was months ago but the thought of writing this blog post has been lingering. I had to scratch the itch. 😉

So, what’s up with Coraid? I can tell that they are different, but it seems to me that their entire storage architecture is so simple that it takes a bit of time for even storage guys to wrap their heads around it. Why do I say that?

For storage guys (like me), we are used to layers. One of the memorable movie quotes I recall is from Shrek: “Ogres are like onions! Onions have layers!“
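
To illustrate what I mean by layers, here is a rough, simplified comparison of the protocol stacks, based on the publicly documented AoE protocol rather than anything Coraid-specific:

```python
# A rough illustration of the protocol layering, not Coraid-specific code.
# AoE (ATA over Ethernet) is a Layer-2 protocol with its own EtherType
# (0x88A2); there is no TCP/IP underneath the storage traffic at all.

stacks = {
    "iSCSI":  ["SCSI", "iSCSI", "TCP", "IP", "Ethernet"],
    "FC SAN": ["SCSI", "FCP", "Fibre Channel"],
    "AoE":    ["ATA", "AoE (EtherType 0x88A2)", "Ethernet"],
}

for name, layers in stacks.items():
    print(f"{name:6s}: {' -> '.join(layers)}")

# The flip side of skipping IP: AoE frames are not routable, so the storage
# network stays within a single Layer-2 broadcast domain.
```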

Continue reading

The beginning of the end of FCoE

Never bet against Ethernet!

I am sure many IT experts and practitioners would agree. In the past 30 years or so, Ethernet has fought and won against many so-called “Ethernet killers”. The one that stood out for me was ATM (Asynchronous Transfer Mode), because in a past job I implemented NFS over ATM, running in LANE (LAN Emulation) mode, in a NetApp filer setup at Sarawak Shell.

That was more than 10 years ago, and 10 years ago ATM was hot technology. It was touted as the next-generation network technology, supposed to unify voice, data and networking. ATM also had better framing and QoS (Quality-of-Service) control, and offered several modes of traffic shaping and policies. Today, ATM is reduced to a niche telecommunication protocol and does not participate much in the LAN technology space.

That was the networking space. The storage networking space has been dominated by Fibre Channel for almost 15 years. Fibre Channel is a serial technology that replaced the parallel, channel-based SCSI technology in the enterprise. And Fibre Channel has grown by leaps and bounds, dominating the SAN (Storage Area Network) landscape with speeds of up to 16 Gbit/sec today.

When the networking world and the storage networking world collided (I mean combined) with Fibre Channel over Ethernet (FCoE) technology some years back, something had to give sooner or later. Yup, FCoE was really hot 2 years ago, but where is it today? Is Cisco still singing about FCoE like it used to? What about the other storage vendors that used to have at least 1 FCoE slide in their product presentations?

Welcome to the world of IT hype! FCoE benefits? The ability to carry LAN and SAN traffic on one piece of wire. 10 Gigabit-style, baby!
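
For the record, here is roughly what that one piece of wire has to carry. The FC frame sizes are standard, but the encapsulation overheads are approximate, so take it as a sketch:

```python
# Illustrative arithmetic for FCoE encapsulation (EtherType 0x8906).
# The FC frame sizes are standard; the FCoE header/trailer overheads are
# approximate, so treat this as a sketch only.

fc_header = 24            # FC frame header
fc_payload_max = 2112     # maximum FC data field
fc_crc = 4
fc_frame_max = fc_header + fc_payload_max + fc_crc      # 2140 bytes

eth_header = 14           # dst MAC + src MAC + EtherType
fcoe_header = 14          # version, reserved bits, encapsulated SOF (approx.)
fcoe_trailer = 4          # encapsulated EOF plus reserved (approx.)
eth_fcs = 4

on_wire = eth_header + fcoe_header + fc_frame_max + fcoe_trailer + eth_fcs
print(f"Largest FC frame        : {fc_frame_max} bytes")
print(f"Encapsulated on the wire: ~{on_wire} bytes")
# That is well past the standard 1500-byte Ethernet payload, which is why
# FCoE needs "baby jumbo" frames (~2.5 KB) and a lossless, DCB-capable
# fabric underneath -- plain best-effort Ethernet will not do.
```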

Continue reading

10Gigabit Ethernet will rule

As far as what next-generation storage networks will look like, 10 Gigabit Ethernet (10GbE) is definitely the strongest candidate for the storage network. This is made possible by key enhancements to Ethernet that allow greater reliability and performance. These enhancements go by several names, such as Data Center Ethernet (a term coined by Cisco) and Converged Enhanced Ethernet (CEE), but probably the most widely used term is DCB, or Data Center Bridging.

Ethernet, so far, has never failed to deliver, and as far as I am concerned, Ethernet will rule for the next 10 years or more. Ethernet has evolved over several generations, from Ethernet running at 10 Mbit/sec to Fast Ethernet, then Gigabit Ethernet and now 10 Gigabit Ethernet. Pretty soon it will be looking at 40 Gbit/sec and 100 Gbit/sec. It is a tremendous protocol, able to evolve and adapt to modern data networks.

Before 10GbE, the delivery of packets was on a best-effort basis, but today's networks demand scalability, security, performance and, most of all, reliability. With the advent of DCB, 10GbE is fortified with these key technologies:

  • 10GBASE-T – using Cat 6/6A cabling standards, 10GBASE-T delivers low cost, simple UTP (unshielded twisted pair) networking to the masses
  • iWARP – support for iWARP is crucial for RDMA (Remote Direct Memory Access). RDMA, in a nutshell, reduces the overhead of the typical buffer-to-buffer copies in the network stack by bypassing those bottlenecks and placing data blocks directly into the memory of the requesting node.
  • Low-latency cut-through switching at Layer 2, which reads just the header of the packet instead of the full length of the packet. The information contained in the header is sufficient to make a switching/forwarding decision (see the sketch after this list).
  • Energy efficiency, introducing a low-power idle state and other implementations that make power consumption more proportional to the network utilization rate
  • Congestion notification and per-priority pause frames, which handle 8 different classes of traffic to ensure lossless network delivery
  • A shortest-path, adaptive routing protocol for Ethernet forwarding. TRILL (Transparent Interconnection of Lots of Links) is one such implementation. Lately OpenFlow has been jumping on the bandwagon as a viable option, but I need to check out OpenFlow support with 10GbE and DCB.
  • FCoE (Fibre Channel over Ethernet) is all the rage these days, and 10GbE has the ability to carry Fibre Channel traffic. This has sparked an initial frenzy among storage vendors.
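
On the cut-through switching point in the list above, here is a quick illustrative calculation of what reading only the header saves, assuming typical frame sizes and no particular switch:

```python
# Illustrative latency arithmetic for store-and-forward vs cut-through
# switching at 10 Gbps; typical frame sizes, no particular switch implied.

link_bps = 10e9

def serialisation_us(frame_bytes):
    """Time to clock the given number of bytes onto/off a 10 Gbps link."""
    return frame_bytes * 8 / link_bps * 1e6

full_frame = 1500      # store-and-forward must receive the whole frame first
header_only = 64       # cut-through decides after roughly the first 64 bytes

print(f"Store-and-forward wait : {serialisation_us(full_frame):.2f} us/hop")
print(f"Cut-through wait       : {serialisation_us(header_only):.3f} us/hop")
# Per hop that is ~1.2 us versus ~0.05 us -- and the difference compounds
# across every switch in the path, which is why low-latency storage fabrics
# care about cut-through forwarding.
```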

Of course, last but not least, we are already seeing the sunset of Fibre Channel. While 8 Gbps FC has been out for a while, its adoption rate seems to have stalled. Many vendors and customers are still in the 4 Gbps range, playing a wait-and-see game. 16 Gbps FC has been talked about, but it seems that all the fireworks are with 10 Gigabit Ethernet right now. It will rule …