Societies in crisis. Data at Fault

The deluge of data is astounding. We get bombarded and attacked by data every single waking minute of our day. And it will only get worse. Our senses will be numbed into submission. In the end, I question the sense of it all. Do we need this much information force-fed to us every second of our lives?

A decade ago we heard about societies living in the Information Age; now the Social Age is being touted. TikTok, Youtube, Twitter, Spotify, Facebook, Metaverse(s) and so many more are creating societies that are defined by data, controlled by data and governed by data. Data can be gathered so easily now that it is hard to make sense of what is relevant or what is useful. Even worse, private data, the information about the individual, is out there either roaming around without any security guarding it or being sold like a gutted fish in the market. The bigger “whales” are peddled to the highest bidder. So, to the prudent human being, what will it be?

Whichever age we are in, Information or Social, it does not matter anymore. Data is used to feed the masses; Data is used to influence the population; Data is the universal tool to shape societies, droning them into submission and ruling them into oblivion.

Societies burn

GIGO the TikTok edition

GIGO is Garbage In, Garbage Out. It is an age-old adage to folks who have worked with data and storage for a long time. You put in garbage data, you get garbage results out. And if you repeat the garbage in enough times, you will have created a long-lasting garbage world. So, imagine now that the data is the garbage being fed into the targeted society. What happens next is very obvious: a garbage society.

Continue reading

Computational Storage embodies Data Velocity and Locality

I have been earnestly observing the growth of Computational Storage for a number of years now. It was known by several previous names, with “in-situ data processing” being the one that stuck with me the most. The Computational Storage nomenclature became more cohesive when SNIA® put together the CMSI (Compute Memory Storage Initiative) some time back. This initiative is where several standards bodies, the major technology players and several SIGs (special interest groups) in SNIA® collaborated to advance the Computational Storage segment in the storage technology industry we know today.

The use cases for Computational Storage are burgeoning, and the functional implementations of Computational Storage are becoming vital to tackle the explosive data tsunami. Back in 2018, IDC predicted in its Worldwide Global Datasphere Forecast that the world would have 175 ZB (zettabytes) of data by 2025. That number, according to hearsay, has been revised to a heady figure of 250 ZB, given the rate at which data is being originated, spawned and more.

Computational Storage driving factors

If we take the Computer Science definition of in-situ processing, Computational Storage can be distilled as processing data where it resides. In a nutshell, “Bring Compute closer to Storage”. This means that there is a processing unit within the storage subsystem which does not require the host CPU to perform processing. In a very simplistic manner, a RAID card in a storage array can be considered a Computational Storage device because it performs the RAID functions instead of the host CPU. But this new generation of Computational Storage has much more prowess than just the RAID function in a RAID card.
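To make the idea concrete, here is a minimal, hypothetical sketch in Python. CSDevice and its methods are made up purely for illustration, not any vendor's actual API; the sketch contrasts the conventional path, where every record is hauled across the bus for the host CPU to filter, with the Computational Storage path, where the filter is pushed down to the processing unit inside the device and only the matching records come back.

```python
# Hypothetical sketch only: "CSDevice" and its methods are not any vendor's
# real API; they merely illustrate "Bring Compute closer to Storage".


class CSDevice:
    """Toy model of a storage device with its own processing unit."""

    def __init__(self, records):
        self.records = records  # data "residing" on the device

    def read_all(self):
        # Conventional path: every record crosses the bus to the host CPU.
        return list(self.records)

    def offload_filter(self, predicate):
        # Computational Storage path: the predicate runs on the device's
        # own processing unit; only matching records cross the bus.
        return [r for r in self.records if predicate(r)]


device = CSDevice([{"id": i, "temp": 20 + (i % 50)} for i in range(1_000_000)])

# Host-side filtering: 1,000,000 records are transferred, then filtered by the CPU.
hot_host = [r for r in device.read_all() if r["temp"] > 65]

# Pushdown filtering: the same result, but only ~80,000 records are transferred.
hot_pushdown = device.offload_filter(lambda r: r["temp"] > 65)

assert hot_host == hot_pushdown
```

The point of the toy example is simply the difference in what moves across the interconnect: the result set instead of the whole dataset.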

There are many factors in Computational Storage that make a lot of sense. Here are a few:

  1. Voluminous data inundates the centralized architecture of the cloud platforms and the enterprise systems today. Much of the data comes from endpoint devices – mobile devices, sensors, IoT, point-of-sales, video cameras, et al. Pre-processing the data at the origin data points can help filter the data, reduce the size to be processed centrally, and secure the data before it is ingested into the central data processing systems (a simple sketch of this pre-processing step follows this list).
  2. Real-time processing of the data at the moment it is received creates the opportunity for Velocity of Data Analytics. Much of the data does not need to move to a central data processing system for analysis. Use cases like autonomous vehicles, fraud detection, recommendation systems and disaster alerts often require near-instantaneous responses. Performing early data analytics at the data origin point has tremendous advantages.
  3. Moore’s Law is waning. The CPU (central processing unit) is no longer the center of the universe. We are beginning to see CPU offloading technologies that augment the CPU’s duties, such as compression, encryption, transcoding and more. SmartNICs, DPUs (data processing units), VPUs (visual processing units), GPUs (graphics processing units) and others have come forth to form a new computing paradigm.
  4. Freeing up central resources with Computational Storage also accelerates the overall distributed data processing in the whole data architecture. The CPU and the adjoining memory subsystem perform less of the context switching caused by I/O interrupts in most compute/storage architectures today. The net effect relieves the CPU and gives back more CPU cycles for higher processing tasks, resulting in faster performance overall.
  5. The rise of memory interconnects is enabling a more distributed computing fabric of data processing subsystems. The rising CXL (Compute Express Link™) interconnect protocol, especially after the Gen-Z annex, has emerged as a force to be reckoned with. This rise of memory interconnects will likely strengthen the case for Computational Storage in the fast-approaching future.
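As a rough illustration of the first two factors above, the short Python sketch below (the record fields and the ship_to_central stub are made up for this example, not part of any real edge or ingestion framework) filters out irrelevant readings, compresses what remains and fingerprints the payload at the data origin point before anything is shipped to the central data processing system.

```python
# Illustrative sketch only: the record fields and the "ship_to_central"
# stub are made up for this example.

import hashlib
import json
import zlib


def preprocess_at_origin(readings, threshold=0.5):
    """Filter, reduce and fingerprint sensor readings before central ingest."""
    # 1. Filter: drop readings that the central system does not need.
    relevant = [r for r in readings if r["value"] >= threshold]

    # 2. Reduce: serialise and compress to shrink what travels upstream.
    payload = zlib.compress(json.dumps(relevant).encode("utf-8"))

    # 3. Secure (integrity): fingerprint the payload so tampering in transit
    #    can be detected when the central system re-hashes it.
    digest = hashlib.sha256(payload).hexdigest()

    return payload, digest


def ship_to_central(payload, digest):
    # Stand-in for the real transport (message queue, HTTPS POST, etc.).
    print(f"shipping {len(payload)} bytes, sha256={digest[:12]}...")


readings = [{"sensor": i, "value": (i % 10) / 10} for i in range(10_000)]
payload, digest = preprocess_at_origin(readings)
ship_to_central(payload, digest)
```

Only the filtered, compressed payload and its fingerprint leave the origin point; the central system ingests a fraction of the raw volume and can verify integrity on arrival.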

Computational Storage Deployment Models

SNIA Computational Storage Universe in 2019

Continue reading

Memory cloud reality soon?

SAN did not always mean Storage Area Network. The acronym had a twin back in the late 90s: System Area Network (SAN). The Fibre Channel fabric topology (THE Storage Area Network) was only starting to take off, and many of the Fibre Channel topologies at the time were either FC-AL (Fibre Channel Arbitrated Loop) or Point-to-Point. So, for a while, SAN was System Area Network, or at least that was what Microsoft® wanted it to be. That SAN obviously did not take off.

System Area Network (architecture shown below) presented a high-speed network over which server clusters could communicate. The communication protocol of choice was VIA (Virtual Interface Architecture), and the proposed applications, notably Microsoft® SQL Server, would use the Winsock API to interface with the network services. Cache coherency across the combined memory resources of a clustered network is often the technology that ensures data synchronization, consistency and integrity.

Alas, System Area Network did not truly take off, and it is now pretty much deprecated in the Microsoft® universe.

System Area Network (SAN)

Continue reading

Scaling new HPC with Composable Architecture

[Disclosure: I was invited by Dell Technologies as a delegate to their Dell Technologies World 2019 Conference from Apr 29-May 1, 2019 in Las Vegas, USA. Tech Field Day Extra was an included activity as part of Dell Technologies World. My expenses, travel, accommodation and conference fees were covered by Dell Technologies, the organizer, and I was not obligated to blog or promote the technologies presented at this event. The content of this blog represents my own opinions and views.]

Deep Learning, Neural Networks, Machine Learning and, subsequently, Artificial Intelligence (AI) are the new generation of applications and workloads for commercial HPC systems. They differ from the traditional, more scientific and engineering HPC workloads, and I have written about the new dawn of supercomputing and the attractive posture of commercial HPC.

Don’t be idle

From the business perspective, the investment in HPC systems is high most of the time, and justifying it to the executives and the investors is not easy. Therefore, it is critical to keep feeding the HPC systems and significantly minimize the idle times for compute, GPUs, network and storage.

However, almost all HPC systems today are inflexible. Once assigned to a project, the resources pretty much stay with the project, even when the workload processing of the project is idle and waiting. Of course, we have to bear in mind that not all resources are fully abstracted, virtualized and software-defined in a way that lets you carve out pieces of the hardware and deliver a percentage of that resource. A case in point is the CPU, where you cannot assign certain clock cycles to one project and the rest to another; the technology isn't there yet. Certain resources, like the GPU, are going down the path of Virtual GPU and into the realm of resource disaggregation. Eventually, all resources of the HPC systems – CPU, memory, FPGA, GPU, PCIe channels, NVMe paths, IOPS, bandwidth, burst buffers etc – should be disaggregated and pooled for disparate applications and workloads based on demands of usage, time and performance.

Hence, we are beginning to see disaggregated HPC system resources composed and built up to meet the diverse mix and needs of HPC applications and workloads. This is even more acute when an AI project might grow cold, but the training of AI/ML/DL workloads continues to stay hot.

Liqid, the early leader in Composable Architecture

Continue reading