The power of E8

[Preamble: I was a delegate of Storage Field Day 14 from Nov 8-10, 2017. My expenses, travel and accommodation were paid for by GestaltIT, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog represents my own opinions and views]

The E8 Storage technology update at Storage Field Day 14 was impressive. Of the several next-generation NVMe storage technologies I have explored so far, E8 came out as the most complete. It was no surprise that they won "Best of Show" awards at the Flash Memory Summit: "Most Innovative Flash Memory Technology" in 2016 and "Most Innovative Flash Memory Enterprise Business Application" in 2017.

Who is E8 Storage?

E8 came out of stealth in August 2016 and has been making waves with very impressive numbers. At launch, they claimed more than 10 million IOPS, with latencies of 100µs for reads and 40µs for writes. And in the SFD14 demo, they reached and surpassed the 10 million IOPS mark.

The design philosophy of E8 Storage differs from both the traditional dual-controller scale-up storage architecture and the multi-node scale-out cluster design. In fact, from a 30,000-foot view, it is quite similar to the "SAN-client" design advocated by Lustre, leveraging a very high-throughput, low-latency network.
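To make the contrast concrete, here is a minimal conceptual sketch in Python of a SAN-client style data path, assuming a split where a central controller holds only the volume-to-drive mapping while host-side agents talk to the shared NVMe drives directly. The names (CentralController, HostAgent, SharedNVMeDrive) are illustrative only and are not E8's actual components or code.

```python
# Conceptual sketch (not E8's implementation): control path goes to a
# metadata controller, data path goes straight from the host to the drives.

class CentralController:
    """Owns only metadata: which drive/offset backs each volume block."""
    def __init__(self, volume_map):
        self.volume_map = volume_map          # {(volume, lba): (drive_id, drive_lba)}

    def resolve(self, volume, lba):
        # Control path: cheap metadata lookup; no user data passes through here.
        return self.volume_map[(volume, lba)]


class SharedNVMeDrive:
    """Stand-in for an NVMe drive reachable by every host over the fabric."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba)


class HostAgent:
    """Host-side agent: asks the controller where the data lives,
    then reads the drive directly, bypassing the controller for data."""
    def __init__(self, controller, drives):
        self.controller = controller
        self.drives = drives

    def read(self, volume, lba):
        drive_id, drive_lba = self.controller.resolve(volume, lba)   # control path
        return self.drives[drive_id].read(drive_lba)                 # data path


if __name__ == "__main__":
    drives = {0: SharedNVMeDrive()}
    drives[0].write(7, b"hello")
    controller = CentralController({("vol1", 0): (0, 7)})
    agent = HostAgent(controller, drives)
    print(agent.read("vol1", 0))   # b'hello'
```

The point of the split is that the controller never sits in the data path, which is how this style of design avoids the dual-controller bottleneck.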


The rise of RDMA

I have known of RDMA (Remote Direct Memory Access) for quite some time, but never studied it in depth. Since my contract work ended last week and I have some time off for personal development, I decided to look deeper into RDMA. Why RDMA?

In the past year or so, RDMA has been appearing on my radar very frequently, and rightly so. The rapid development and adoption of NVMe (Non-Volatile Memory Express) have pushed All-Flash Arrays to the next level. This shifts the I/O and throughput performance bottlenecks away from the NVMe storage medium and into the legacy world of SCSI.
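Some back-of-the-envelope arithmetic shows why the bottleneck moves. The latency figures below are assumptions chosen for illustration, not measurements of any product: the faster the medium gets, the larger the share of total latency eaten by a fixed protocol and software overhead.

```python
# Illustrative arithmetic only; the numbers are assumptions, not benchmarks.
PROTOCOL_OVERHEAD_US = 20          # assumed fixed SCSI stack + transport cost

for medium, media_latency_us in [("SAS SSD", 100), ("NVMe flash", 10)]:
    total = media_latency_us + PROTOCOL_OVERHEAD_US
    share = PROTOCOL_OVERHEAD_US / total
    print(f"{medium:10s}: media {media_latency_us:3d} µs + stack "
          f"{PROTOCOL_OVERHEAD_US} µs = {total} µs "
          f"({share:.0%} of the time spent outside the medium)")
```

With these assumed figures, the stack accounts for roughly 17% of the latency on the slower medium but about 67% on NVMe, which is why attention turns to the transport.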

Most storage interfaces and protocols today, such as SAS, SATA, iSCSI and Fibre Channel, still carry SCSI payloads and would have to translate between NVMe and SCSI. NVMe-to-SCSI bridges have to be present to facilitate the translation.
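As a rough illustration of what such a bridge has to do, here is a toy Python sketch that unpacks a SCSI READ(10) CDB and rebuilds the equivalent fields of an NVMe Read command. Real bridges handle far more (writes, error and task management, data buffer setup, zero-length transfers); this sketch only maps the LBA and block count.

```python
# Toy SCSI-to-NVMe translation: READ(10) CDB -> NVMe Read command fields.
import struct

NVME_OPC_READ = 0x02   # NVMe NVM command set: Read opcode

def scsi_read10_to_nvme(cdb: bytes, nsid: int = 1) -> dict:
    if len(cdb) != 10 or cdb[0] != 0x28:          # 0x28 = SCSI READ(10) opcode
        raise ValueError("not a READ(10) CDB")
    lba = struct.unpack(">I", cdb[2:6])[0]        # 32-bit big-endian starting LBA
    nblocks = struct.unpack(">H", cdb[7:9])[0]    # 16-bit transfer length (blocks)
    return {
        "opcode": NVME_OPC_READ,
        "nsid": nsid,
        "cdw10": lba & 0xFFFFFFFF,                # starting LBA, lower 32 bits
        "cdw11": 0,                               # starting LBA, upper 32 bits
        "cdw12": (nblocks - 1) & 0xFFFF,          # NLB is zero-based in NVMe
    }

# Example: READ(10) of 8 blocks starting at LBA 4096
cdb = bytes([0x28, 0, 0, 0, 0x10, 0, 0, 0, 8, 0])
print(scsi_read10_to_nvme(cdb))
# {'opcode': 2, 'nsid': 1, 'cdw10': 4096, 'cdw11': 0, 'cdw12': 7}
```

Every I/O that crosses such a bridge pays this repackaging cost in both directions, which is exactly the overhead the industry is trying to remove.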

In the slide below, shared at the Flash Memory Summit, numerous red boxes lay out the SCSI connections and interfaces where SCSI-to-NVMe translation (and vice versa) would be required.
