SAN did not always mean Storage Area Network. Back in the late 90s, the acronym had a twin: System Area Network (SAN). Fibre Channel fabric topology (THE Storage Area Network we know today) was only starting to take off; most Fibre Channel deployments at the time were either FC-AL (Fibre Channel Arbitrated Loop) or Point-to-Point. So, for a while, SAN meant System Area Network, or at least that was what Microsoft® wanted it to be. That SAN obviously did not take off.
System Area Network (architecture shown below) presented a high speed network over which server clusters could communicate. The communication protocol of choice was VIA (Virtual Interface Architecture), and the proposed applications, notably Microsoft® SQL Server, would use the Winsock API to interface with the network services. Cache coherency across the combined memory resources of a clustered network is the technology that ensures data synchronization, consistency and integrity.
Alas, System Area Network did not truly take off, and now it is pretty much deprecated from the Microsoft® universe.
Ahead of its time
9 years ago I blogged "Memory as the last bastion of IT". In that blog I wrote:
As the Cloud Computing matures, memory is going to be THE component that defines the next leap forward, which is to make the Cloud work like one giant computer. Extending the memory and incorporating memory both on-premises, on the host side as well as memory in the cloud, into a fast, low latency memory pool would complete the holy grail of Cloud Computing as one giant computer.
My prediction at that time came about because of a small company called RNA Networks (acquired by Dell®) that wanted to "pool all the physical memory of all servers into a single, cohesive and integrated memory pool, so that every application on each of the servers can use the 'extended' memory in an instance, without some sort of clustering software or parallel database".
The conceptual diagram was the Memory Cloud espoused by RNA Networks.
Memory Cloud never really happened in a big way for enterprise storage or the cloud back then. Perhaps it was ahead of its time.
The technologies are ready now
Fast forward 2 decades to present day. I am beginning to see some interesting technologies that have the communication speeds and low latencies to address the cache coherence challenge. Overcoming this would lead to a breakthrough for Memory Cloud.
When it comes to Persistent Memory, I am writing from an Intel® Optane™ DC Persistent Memory perspective, specifically Memory Mode. In Memory Mode, the DRAM acts as a cache in front of the Intel® Optane™ modules, which present a much larger memory resource footprint than the DRAM, albeit at a higher latency. The cache management is handled by the Intel® processor's memory controller, as shown below:
The Far Memory shown in the diagram above refers to the Intel® Optane™ DC Persistent Memory operating in Memory Mode. OK, fine. It is a misnomer to call Optane™ in Memory Mode persistent, because in this mode it is not. It behaves as volatile memory.
Intel® Optane™ in Memory Mode presents a much larger memory resource available to the host processor and the connecting devices.
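To make the near/far memory idea concrete, here is a toy model of the Memory Mode arrangement: DRAM acting as a transparent cache in front of a larger, slower far-memory tier. This is an illustration only, not the memory controller's actual algorithm, and the latency figures are made-up assumptions for the sketch.

```python
# Toy model of Memory Mode: DRAM is a "near memory" LRU cache in front
# of a larger Optane "far memory" tier. The eviction policy and the
# latency numbers below are illustrative assumptions, not real hardware.
from collections import OrderedDict

class TwoTierMemory:
    """Simulates DRAM-as-cache over a larger far-memory tier."""

    def __init__(self, dram_lines, dram_ns=80, optane_ns=350):
        self.dram = OrderedDict()      # near memory: LRU set of cached lines
        self.dram_lines = dram_lines   # DRAM capacity, in cache lines
        self.dram_ns = dram_ns         # assumed DRAM hit latency (ns)
        self.optane_ns = optane_ns     # assumed far-memory latency (ns)
        self.hits = self.misses = 0

    def access(self, line_addr):
        """Return the modeled latency (ns) of touching one cache line."""
        if line_addr in self.dram:
            self.dram.move_to_end(line_addr)    # refresh LRU position
            self.hits += 1
            return self.dram_ns
        self.misses += 1
        if len(self.dram) >= self.dram_lines:   # evict least recently used
            self.dram.popitem(last=False)
        self.dram[line_addr] = True
        return self.optane_ns                   # miss: served from far memory

mem = TwoTierMemory(dram_lines=4)
# A hot working set that fits in DRAM stays fast after warm-up:
for addr in [0, 1, 2, 3] * 10:
    mem.access(addr)
print(mem.hits, mem.misses)    # prints: 36 4
```

The point of the model: applications see one large flat memory space, and as long as the working set fits in DRAM, most accesses run at near-memory speed.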
Compute Express Link™
CXL is a well-supported, ultra-high-speed and ultra-low-latency cache-coherent interconnect industry standard built on PCIe 5.0 and beyond. It allows a processor's memory hierarchy (think L1/L2 caches) to interconnect and stay coherent with the memory resources of other devices such as GPUs, DPUs, computational storage, FPGAs, accelerators and more. This enables high speed cache resource sharing and offloads among host processors and other endpoint devices in the system.
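Cache coherence is the crux of that paragraph, so here is a minimal sketch of the write-invalidate idea behind it: when one agent writes a shared line, every other agent's cached copy is invalidated before the write becomes visible. This is a simplified illustration of the concept, not the actual CXL.cache protocol, and the names are mine.

```python
# Hypothetical sketch of write-invalidate cache coherence, the kind of
# guarantee a cache-coherent interconnect provides between a host CPU
# and devices (GPU, FPGA, etc.). Simplified for illustration.

class CoherentLine:
    """One shared cache line kept coherent across several agents."""

    def __init__(self, value=0):
        self.memory = value        # backing value in the shared pool
        self.owners = set()        # agents currently holding a valid copy

    def read(self, agent):
        self.owners.add(agent)     # agent now caches a valid copy
        return self.memory

    def write(self, agent, value):
        # Write-invalidate: every other cached copy becomes stale and is
        # dropped before the new value is made globally visible.
        self.owners = {agent}
        self.memory = value

line = CoherentLine()
line.read("cpu")
line.read("gpu")               # both agents cache the line
line.write("gpu", 42)          # invalidates the CPU's cached copy
print(line.read("cpu"))        # CPU re-reads the fresh value: 42
```

Real protocols (MESI and friends) track more states per line, but the contract is the same: no agent ever observes a stale value, which is exactly what a shared Memory Cloud requires.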
CXL version 1.1 kicks off the initial technology entry with things like I/O initiation and structuring of the caches, resource discovery, direct memory access operations and so on. But it is CXL version 2.0 that spins up the true capability of the Memory Cloud concept. The CXL switched fabric that arrives in 2.0 solidifies the foundation and the network fabric of a Memory Cloud.
Here is a look at CXL version 2.0 architecture.
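The headline feature of the 2.0 switched fabric is memory pooling: capacity from a shared pool of memory devices can be bound to different hosts and handed back at run time. Here is a sketch of that idea; the class and method names are my own illustration, not a real CXL software interface.

```python
# Hypothetical sketch of CXL 2.0-style memory pooling: a switch carves
# capacity out of a shared memory pool and binds it to hosts on demand.
# The API below is an illustration, not an actual CXL interface.

class MemoryPoolSwitch:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.bindings = {}                 # host -> GB currently bound

    def bind(self, host, gb):
        """Carve gb of pooled memory out for a host, if available."""
        if gb > self.free_gb:
            return False                   # pool exhausted, bind refused
        self.free_gb -= gb
        self.bindings[host] = self.bindings.get(host, 0) + gb
        return True

    def release(self, host):
        """Return all of a host's pooled memory to the free pool."""
        self.free_gb += self.bindings.pop(host, 0)

switch = MemoryPoolSwitch(capacity_gb=1024)
switch.bind("host-a", 512)
switch.bind("host-b", 384)
print(switch.free_gb)        # 128 GB left in the pool
switch.release("host-a")     # host-a's capacity returns to the pool
print(switch.free_gb)        # 640
```

This elasticity is what makes the fabric feel like a Memory Cloud: capacity follows the workload instead of being stranded inside individual servers.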
There are other "almost similar" low latency open systems memory interconnects in the industry today like Gen-Z™, CCIX®, and OpenCAPI™, but none has the buzz and momentum that CXL has right now. Gen-Z™ and CXL have a collaboration MoU (memorandum of understanding) to promote interoperability between the two technologies.
The Intel® Factor
Coincidentally, both Optane™ and CXL were conceived at Intel® and are driven strongly by Intel®. Despite the flood of negative news about Intel® in recent years, Intel®, to me, remains a formidable force in datacenter technologies and will continue to be so. I bristle at some of these reports, but Intel® has also made many missteps that placed it in the position it is in today.
In conclusion, combining Persistent Memory (in Memory Mode) and CXL could become the key enabler in creating a Memory Cloud future. The Intel® factor is important to drive the ambitions of a Memory Cloud in the data centers and in the clouds. The reality of a Memory Cloud cannot be realized without the Intel® driving force.
The next 5 years
I see an exciting future for Memory Cloud. Having been a technology industry observer, and occasional pundit, for a long time, I really want to see the true Memory Cloud come to fruition. Both the enterprise and the cloud industry have been going through transformational changes in the past 5 years, but these changes have been merely evolutionary extensions of what came before.
With the advent of something like a Memory Cloud, I am pretty sure we will see revolutionary leaps and bounds. It will be game-changing. The next 5 years are going to be exhilarating if you are up for the warp speed ride.