NVMe (Non-Volatile Memory Express) is upon us. And in the next 2-3 years, we will see a slew of new storage solutions and technologies based on NVMe.
Just a few days ago, The Register published the article “Seventeen hopefuls fight for the NVMe Fabric array crown”, and it was timely. I, for one, could not be more excited about the development and advancement of NVMe and the upcoming NVMeF (NVMe over Fabrics).
This is it. This is the one that will end the wars of DAS, NAS and SAN and unite the warring factions between server-based SAN (the sexy name differentiating old DAS and new DAS) and the networked storage of SAN and NAS. There will be PEACE.
Remember this?
Nutanix popularized the “No SAN” movement, which later led to VMware VSAN and other server-based SAN solutions and hyperconverged techs such as PernixData (acquired by Nutanix), DataCore and EMC ScaleIO; the same approach also operates at hyperscalers – the likes of Facebook and Google. The hyperconverged solutions and server-based SAN have blurred the lines of storage, but they are still not the usual networked storage architectures of SAN and NAS. I blogged about this, mentioning how the pendulum has swung back to favour DAS, or to put it more appropriately, server-based SAN. There has always been a “Great Divide” between the two modes of storage architectures.
However, NVMe and NVMeF, as they evolve, can become the Great Peacemaker, bridging the divide and uniting both sides into a single storage fabric.
What makes NVMe special? A few features stand out.
I especially like the “doorbell registers”, which are part of the I/O queues. They kind of remind me of the picture below:
The “doorbell registers” mark the tail-end of each Submission Queue ring buffer and the head-end of each Completion Queue ring buffer – the I/O delivery micro-channels between the NVMe driver and the NVMe controller.
Each time the host NVMe driver places an I/O entry or command at the tail-end of the Submission Queue, there is a DING! This signals the NVMe controller to “pick up” the entry. Once the I/O entry or command has been processed and completed, the controller posts a completion entry and raises an MSI-X (Message Signaled Interrupt), and there is a DING again on the Completion Queue head doorbell! How cool is that?
A picture of the Queues, both Submission and Completion, is shown below:
Picture from https://www.osr.com/nt-insider/2014-issue4/introduction-nvme-technology/
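To make the doorbell mechanics a little more concrete, here is a minimal C sketch of the submission side from the host driver’s point of view. The struct layout, field names and the fixed 64-byte command size are simplified illustrations (not the actual spec structures or the Linux nvme driver code), but the flow – write the entry at the tail, advance the tail, then write the tail value to the doorbell register – is the essence of the DING.

```c
#include <stdint.h>
#include <string.h>

/* Simplified, illustrative view of one NVMe I/O Submission Queue.
 * Field names and layout are hypothetical; the real definitions live
 * in the NVMe specification and in driver code such as Linux's
 * drivers/nvme/host/pci.c. */
struct nvme_sq {
    volatile uint32_t *tail_doorbell; /* MMIO doorbell register on the controller */
    void     *entries;                /* ring buffer of 64-byte command slots      */
    uint16_t  tail;                   /* next free slot, owned by the host         */
    uint16_t  depth;                  /* number of slots in the ring               */
};

/* Post one 64-byte command and "DING" the controller. */
static void nvme_submit(struct nvme_sq *sq, const void *cmd)
{
    /* 1. Copy the command into the tail slot of the ring buffer. */
    memcpy((char *)sq->entries + (size_t)sq->tail * 64, cmd, 64);

    /* 2. Advance the tail, wrapping around the ring. */
    sq->tail = (uint16_t)((sq->tail + 1) % sq->depth);

    /* 3. Write the new tail to the doorbell register: this is the DING
     *    telling the controller there is work to pick up. A real driver
     *    would issue a write memory barrier before this store. */
    *sq->tail_doorbell = sq->tail;
}
```

The completion side mirrors this: the controller writes a completion entry into the Completion Queue, fires the MSI-X interrupt, and the driver, after consuming the entries, writes the Completion Queue head doorbell to hand the slots back.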
NVMe allows plenty of these queues – up to 64K of them – each with a queue depth of up to 64K commands. Compare this to present-day SATA and the AHCI complex, with its single command queue of 32 entries, and NVMe is a quantum leap in terms of I/O processing and, of course, performance.
Another significant advancement is the NVMe command set. There are only 10 required admin commands for configuration and I/O management, and 3 or 6 (depending on what you read) optional commands. The commands include Create I/O Submission Queue, Delete I/O Submission Queue, Create I/O Completion Queue and Delete I/O Completion Queue. There are also Identify, Abort, Get Log Page, Set Features, Get Features and Asynchronous Event Request. This command set is so much lighter than the SCSI-infused SAS and SATA command sets we have right now.
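To give a sense of how small that command set is, the ten mandatory admin opcodes fit comfortably into one enum. The identifier names below are illustrative (the Linux driver uses similar but not identical names); the opcode values are the ones defined in the NVMe base specification, so double-check them against the spec revision you are working to.

```c
/* Mandatory NVMe admin command opcodes (NVMe base specification).
 * Names are illustrative; values should be verified against the
 * spec revision you target. */
enum nvme_admin_opcode {
    NVME_ADMIN_DELETE_SQ    = 0x00,  /* Delete I/O Submission Queue */
    NVME_ADMIN_CREATE_SQ    = 0x01,  /* Create I/O Submission Queue */
    NVME_ADMIN_GET_LOG_PAGE = 0x02,  /* Get Log Page                */
    NVME_ADMIN_DELETE_CQ    = 0x04,  /* Delete I/O Completion Queue */
    NVME_ADMIN_CREATE_CQ    = 0x05,  /* Create I/O Completion Queue */
    NVME_ADMIN_IDENTIFY     = 0x06,  /* Identify                    */
    NVME_ADMIN_ABORT        = 0x08,  /* Abort                       */
    NVME_ADMIN_SET_FEATURES = 0x09,  /* Set Features                */
    NVME_ADMIN_GET_FEATURES = 0x0A,  /* Get Features                */
    NVME_ADMIN_ASYNC_EVENT  = 0x0C   /* Asynchronous Event Request  */
};
```

Set that against the sprawl of the command references that SAS and SATA devices carry around, and it is easy to see why NVMe is described as light.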
The lighter characteristics of the NVMe commands naturally extend into the network, giving the next-generation data fabric, NVMeF, a big leg up to modernize present-day SAN and NAS. NVMeF (NVMe over Fabrics), whether over RDMA, InfiniBand, Fibre Channel or iWARP, will glue together the delivery of NVMe and data regardless of whether the storage is direct-attached, on local networks or even on wider networks. This will bridge data storage networks into a single seamless data fabric, reducing data silos in the near future.
I have a natural dislike for the hyperconverged architectures of today. Despite the hot streaks of Nutanix, Maxta, Pivot3, SimpliVity and others, they are geared towards specific workloads. Almost always, these vendors tout their prowess and strengths in VDI, VSI and databases. I have read too many brochures and gone to too many talks. These guys continue to recycle their marketing messages … over and over again.
I see NVMe and NVMeF having the ability to link these workloads together across the hyperconverged/server SAN/networked-storage space. Both performance and capacity can now be combined in one single fabric architecture.
Finally, with my fingers crossed, world peace … at least at the storage architecture level.
Do you see a need to develop an iSCSI interface for an NVMe storage system? Or should one stick to NVMeF only? Why?
Hi Bob
I believe it would be ineffective to continue down a SCSI route once the NVMe interface and protocol have taken over. That’s my 2 cents.
Thanks
/Chin-Fah