Computational Storage embodies Data Velocity and Locality

I have been earnestly observing the growth of Computational Storage for a number of years now. It was known by several previous names, and the name “in-situ data processing” is the one that stuck with me the most. The Computational Storage nomenclature became more cohesive when SNIA® put together the CMSI (Compute, Memory and Storage Initiative) some time back. This initiative is where several standards bodies, the major technology players and several SIGs (special interest groups) in SNIA® collaborated to advance the Computational Storage segment into the one we know in the storage technology industry today.

The use cases for Computational Storage are burgeoning, and the functional implementations of Computational Storage are becoming vital to tackle the explosive data tsunami. In its Worldwide Global DataSphere Forecast, IDC predicted in 2018 that the world will have 175 ZB (zettabytes) of data by 2025. That number, according to hearsay, has since been revised to a heady figure of 250 ZB, given the extraordinary rate at which data is being originated and spawned.

Computational Storage driving factors

If we take the Computer Science definition of in-situ processing, Computational Storage can be distilled as processing data where it resides. In a nutshell, “Bring Compute closer to Storage”. This means that there is a processing unit within the storage subsystem, so the host CPU is not required to perform the processing. In a very simplistic way, a RAID card in a storage array can be considered a Computational Storage device because it performs the RAID functions instead of the host CPU. But this new generation of Computational Storage has much more prowess than just the RAID function in a RAID card.

There are many factors in Computational Storage that make a lot of sense. Here are a few:

  1. Voluminous data inundates the centralized architecture of the cloud platforms and the enterprise systems today. Much of the data comes from endpoint devices – mobile devices, sensors, IoT, point-of-sale systems, video cameras, et al. Pre-processing the data at the origin data points can help filter the data, reduce the size to be processed centrally, and secure the data before it is ingested into the central data processing systems (see the sketch after this list).
  2. Real-time processing of data at the moment it is received creates the opportunity for the Velocity of Data Analytics. Much of the data does not need to move to a central data processing system for analysis. Use cases like autonomous vehicles, fraud detection, recommendation systems and disaster alerts often require near-instantaneous responses. Performing early data analytics at the data origin point has tremendous advantages.
  3. Moore’s Law is waning. The CPU (central processing unit) is no longer the center of the universe. We are beginning to see CPU offloading technologies that augment the CPU’s duties, such as compression, encryption, transcoding and more. SmartNICs, DPUs (data processing units), VPUs (visual processing units), GPUs (graphics processing units) and others have come forth to form a new computing paradigm.
  4. Freeing up central resources with Computational Storage also accelerates the overall distributed data processing across the whole data architecture. The CPU and the adjoining memory subsystem perform less of the context switching caused by I/O interrupts that plagues most compute/storage architectures today. The net effect relieves the CPU, giving back more CPU cycles for higher-level processing tasks and resulting in faster overall performance.
  5. The rise of memory interconnects is enabling a more distributed computing fabric of data processing subsystems. The CXL (Compute Express Link™) interconnect protocol, especially after the Gen-Z annex, has emerged as a force to be reckoned with. This rise of memory interconnects will likely strengthen the case for Computational Storage in the fast-approaching future.
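To make point 1 a little more concrete, here is a minimal, hypothetical sketch of edge pre-processing in Python: raw sensor readings are filtered down, aggregated and pseudonymized at the origin, so that only compact summaries are shipped to the central platform. The field names, the 5-minute window and the hashing choice are illustrative assumptions, not any particular product’s behaviour.

```python
# Hypothetical edge pre-processing sketch: filter, reduce and secure data
# at the origin point before it is sent to the central data platform.
import hashlib
import statistics

def preprocess(readings, window=300):
    """Aggregate raw readings into one summarized record per device per time window."""
    buckets = {}
    for r in readings:
        key = (r["device_id"], r["timestamp"] // window)
        buckets.setdefault(key, []).append(r["temperature"])

    for (device_id, bucket), temps in sorted(buckets.items()):
        yield {
            # "Secure the data": pseudonymize the device identity before it leaves the edge
            "device": hashlib.sha256(device_id.encode()).hexdigest()[:12],
            "window_start": bucket * window,
            "avg_temp": round(statistics.mean(temps), 2),
            "max_temp": max(temps),
            "samples": len(temps),   # size reduction: N raw rows become 1 summary row
        }

raw = [
    {"device_id": "sensor-001", "timestamp": 1000, "temperature": 21.4},
    {"device_id": "sensor-001", "timestamp": 1060, "temperature": 21.9},
    {"device_id": "sensor-002", "timestamp": 1010, "temperature": 35.2},
]
for record in preprocess(raw):
    print(record)   # only these compact records travel to the central system
```

In a real Computational Storage or edge deployment this logic would run on the device or drive itself; the sketch only illustrates the filter-reduce-secure idea, not any vendor’s implementation.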

Computational Storage Deployment Models

SNIA Computational Storage Universe in 2019

Continue reading

IT Data practices and policies still wanting

There is an apt and honest editorial cartoon about Change.

From https://commons.wikimedia.org/wiki/File:Who-Wants-Change-Crowd-Change-Management-Yellow.png

I was a guest of the Channel Asia Executive Roundtable last week. I joined several luminaries in Southeast Asia to discuss the topic of “How Partners can bring value to the businesses to manage their remote workforce”.

Covid-19 decimated what we knew as work in general. The world had to pivot and now, 2+ years later, a hybrid workforce has emerged. The mixture of remote work, work-from-home (WFH), the physical office and everywhere else has brought new mindsets and new attitudes to employers and their staff alike. Without a doubt, the remote way of working is here to stay.

People won but did the process lose?

The knee-jerk reaction when the Covid lockdowns hit was to switch work to remote access to applications on premises or in the clouds. Many companies had already moved to the software-as-a-service (SaaS) way of working, but not all had made the jump, just as not all of a company’s applications were SaaS-based. Of course, the first thing these stranded companies did was to look for technologies to solve this unforeseen disorder.

People Process Technology.
Picture from https://iconstruct.com/blog/people-process-technology/

Continue reading

Please cultivate 3-2-1 and A-B-C of Data Management

My Sunday morning was muddled 2 weeks ago. There was a frenetic call from someone whom I knew a while back, and he needed some advice. It turned out that his company’s files were encrypted and the “backups” (more on this later) were gone. With some detective work, I found that their files were stored in a Synology® NAS, often accessed remotely via QuickConnect, and “backed up” to Microsoft® Azure. I put “backup” in inverted commas because their definition of “backup” was using Synology®’s Cloud Sync to Azure. It is not a true backup but a file synchronization service that is often mislabeled as a data protection backup service.
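To see why that distinction matters, here is a toy Python sketch (not any vendor’s API, and deliberately simplified) of my assumption of a one-way sync versus a versioned backup: the sync target faithfully mirrors the ransomware-encrypted files, while the backup still holds a recoverable point-in-time copy.

```python
# Toy illustration: a one-way file sync is not a backup.
from copy import deepcopy

nas = {"project.docx": "original contents"}

sync_target = {}        # e.g. a cloud bucket kept in lockstep with the NAS
backup_versions = []    # e.g. nightly point-in-time copies

def run_sync():
    """One-way sync: the destination always mirrors whatever is on the source."""
    sync_target.clear()
    sync_target.update(nas)

def run_backup():
    """Versioned backup: append an immutable point-in-time copy of the source."""
    backup_versions.append(deepcopy(nas))

# Day 1: healthy files, both jobs run
run_sync()
run_backup()

# Day 2: ransomware encrypts the NAS, then both jobs run again
nas["project.docx"] = "X9$!ENCRYPTED!$9X"
run_sync()
run_backup()

print("Sync target now holds:    ", sync_target["project.docx"])       # encrypted too
print("Oldest backup still holds:", backup_versions[0]["project.docx"]) # recoverable
```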

All of his company’s project files were encrypted and there were no backups to recover from. It was a typical ransomware cluster F crime scene.

I would have gloated, because many small and medium businesses like his take a very poor and lackadaisical attitude towards good data management practices. There is no use crying over spilled milk when prevention is better than cure. By not investing early in the prevention, the cure will likely be 3x more expensive. And in this case, he wanted to use Deloitte® recovery services, which I did not know existed. Good luck with the recovery was all I said to him after my Sunday morning was made topsy-turvy of sorts.

NAS is the ransomware goldmine

I have said it before and I am saying it again. NAS devices, especially the consumer and prosumer brands, are easy pickings because little attention has been paid to implementing good data management practices, either by the respective vendors or by the end users themselves. Two years ago I was already seeing a consistent pattern of heightened ransomware attacks on NAS devices, especially the NAS devices that proliferate in the small and medium business market segment.

The WFH (work from home) practice triggered by the Covid-19 pandemic has made NAS devices essential for businesses. NAS devices are the workhorses of many businesses after all. The ease of connecting from anywhere with features similar to the Synology® QuickConnect I mentioned earlier, or through VPNs (virtual private networks), or self-created port forwarding (for those who want to save a quick buck [ sarcasm ]), opened the doors to bad actors and easy ransomware incursions. Good data management practices are often sidestepped or ignored in exchange for simplicity, convenience, and trying to save foolish dollars. Until ….

Continue reading

Time to Conflate Storage with Data Services

Around the year 2016, I started to put together a better structure to explain storage infrastructure. I started using the term Data Services Platform before it became what it is today. And I formed a pictorial scaffold to depict what I wanted to share. This was what I made at that time.

Data Services Platform (circa 2016)- Copyright Heoh Chin Fah

One of the reasons I am bringing this up again is that many end users and resellers still look at storage from the perspective of capacity, performance and price. And as if two plus two equals five, many storage pre-sales folks and architects reciprocate with the same type of responses, which has deteriorated the view of the storage technology infrastructure industry as a whole. This situation irks me. A lot.

Continue reading

Rethinking data processing frameworks in real time

“Row, row, row your boat, gently down the stream…”

Except the stream isn’t gentle at all in data processing’s new context.

For many of us in the storage infrastructure and data management world, the well-known framework is storing data to and retrieving data from storage media. That media could be a disk-based storage array, a tape, or some cloud storage where the storage media is abstracted from the users and the applications. The model of post-processing the data after it has been safely and persistently stored on that media is a well-understood and mature one. Users, applications and workloads (A&W) process this data in its resting phase, retrieve it, work on it, and write it back to the resting phase again.

There is another model of data processing that has been bubbling over the years and is now reaching a boiling point, although it has not reached its apex yet. This is processing the data in flight, while it is still flowing, as it passes through a processing engine. The nature of this kind of data is described in a 2018 conference presentation I chanced upon a year ago (see the sketch after the figure below).

letgo marketplace processing numbers in 2018

  • NRT = near real time
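Here is a minimal Python sketch of the two models under my own simplified assumptions: the “at rest” path persists everything first and analyzes it later, while the “in flight” path evaluates each event as it streams past, keeping only the results. The event fields and the fraud-style threshold are hypothetical; production pipelines would use stream processing engines such as Kafka, Flink or Spark Streaming rather than plain generators.

```python
# Contrast of post-processing data at rest vs processing data in flight.
import time

def event_stream(n=10):
    """Stand-in for an unbounded stream of incoming purchase events."""
    for i in range(n):
        yield {"order_id": i, "amount": 50 + (i % 3) * 400, "ts": time.time()}

# Model 1: post-processing data at rest (store everything first, analyze later)
stored = list(event_stream())                     # persist the whole dataset to "storage"
flagged_batch = [e for e in stored if e["amount"] > 800]

# Model 2: processing data in flight (analyze each event as it passes through)
def flag_in_flight(stream, threshold=800):
    for event in stream:                          # never materializes the whole stream
        if event["amount"] > threshold:
            yield event                           # near-real-time alert on the spot

flagged_stream = list(flag_in_flight(event_stream()))

print("batch flagged: ", [e["order_id"] for e in flagged_batch])
print("stream flagged:", [e["order_id"] for e in flagged_stream])
```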

From a storage technology infrastructure perspective, this kind of data processing piqued my curiosity immensely. I have been studying this burgeoning new data processing model in my spare time, figuring out where it fits, and bringing that understanding back to the storage infrastructure and data management side.

Continue reading

How well do you know your data and the storage platform that processes the data

Last week was consumed by many conversations on this topic. I was quite jaded, really. Unfortunately, many still take a very simplistic view of storage technology, or should I say, of the over-marketing of storage technology. So much so that end users make incredible assumptions about the benefits of a storage array, a software-defined storage platform or even cloud storage. And too often the caveats of turning on a feature or tuning a configuration to the max are discarded or neglected. Regard for good storage and data management best practices? What’s that?

I share some of my thoughts on handling conversations like these, trying to set the right expectations rather than overhyping a feature or a function of the data storage services.

Complex data networks and the storage services that serve them

I/O Characteristics

Applications and workloads (A&W) read from and write to data storage services platforms. These could be local DAS (direct-attached storage), network storage arrays in SAN and NAS, and now object storage, or cloud storage services. Regardless of whether the data is structured or unstructured, different A&Ws have different behavioural I/O patterns when accessing data from storage. Therefore storage has to be configured as well as possible to match these patterns, so that it performs optimally for these A&Ws. Without going into deep details, here are a few to think about (a small illustrative sketch follows the list):

  • Random and Sequential patterns
  • Block sizes of these A&Ws, typically ranging from 4K to 1024K
  • Causal effects of synchronous and asynchronous I/Os to and from the storage
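As a rough, hedged illustration of those three points, below is a small Python micro-benchmark sketch. It is not a substitute for a proper tool like fio: the scratch file path, the 16 MiB working set and the block sizes are assumptions chosen for portability, and the O_SYNC/pwrite calls are POSIX-only, so it will not run as-is on Windows.

```python
# Minimal, illustrative I/O pattern sketch (POSIX-only; use fio for real benchmarking).
import os
import random
import time

FILE_PATH = "io_scratch.bin"                        # hypothetical scratch file
FILE_SIZE = 16 * 1024 * 1024                        # 16 MiB working set
BLOCK_SIZES = [4 * 1024, 64 * 1024, 1024 * 1024]    # 4K, 64K and 1024K blocks

def prepare_file():
    """Create the scratch file once, sized but unwritten."""
    with open(FILE_PATH, "wb") as f:
        f.truncate(FILE_SIZE)

def timed_reads(block_size, sequential=True):
    """Read the whole file either sequentially or at shuffled (random) offsets."""
    offsets = list(range(0, FILE_SIZE, block_size))
    if not sequential:
        random.shuffle(offsets)                     # random I/O pattern
    start = time.perf_counter()
    with open(FILE_PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block_size)
    return time.perf_counter() - start

def timed_writes(block_size, synchronous=False):
    """Write the whole file; O_SYNC forces each write to persist before returning,
    while plain writes are absorbed by the page cache (asynchronous in effect)."""
    flags = os.O_WRONLY | (os.O_SYNC if synchronous else 0)
    data = b"\0" * block_size
    fd = os.open(FILE_PATH, flags)
    start = time.perf_counter()
    try:
        for off in range(0, FILE_SIZE, block_size):
            os.pwrite(fd, data, off)
    finally:
        os.close(fd)
    return time.perf_counter() - start

if __name__ == "__main__":
    prepare_file()
    for bs in BLOCK_SIZES:
        print(f"{bs // 1024:>5} KiB blocks: "
              f"seq-read {timed_reads(bs, True):.3f}s  "
              f"rand-read {timed_reads(bs, False):.3f}s  "
              f"buffered write {timed_writes(bs, False):.3f}s  "
              f"O_SYNC write {timed_writes(bs, True):.3f}s")
    os.remove(FILE_PATH)
```

Even on a laptop, the gap between sequential and random reads, between small and large blocks, and between buffered and O_SYNC writes hints at why a storage platform must be matched to the A&W’s I/O behaviour rather than configured blindly.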

Continue reading

At the mercy of the cloud deity

Amazon Web Services (AWS) went down in the middle of last week. News of the outage was widely reported:

AWS Management Console unavailable error

Piling on the misery

The AWS outage headlines attract the naysayers, the fickle armchair pundits, and the opportunists. Here are a few news articles in which these folks chastise the cloud giant.

Of course, I am one of these critics. I don’t deny that. But I read this situation through the lens of the multicloud hyperbole, of which I am not a fan. There is too much multicloud whitewashing by vendors trying to pitch multicloud as a disaster recovery solution, without understanding that this is easier said than done.

Continue reading

Storage Elephant Compute Birds

Data movement is expensive. Not just in cost, but in latency and resources as well. Thus there have been many narratives about moving compute closer to where the data is stored, because moving compute is definitely more economical than moving data. I borrowed the analogy of the two animals from some old NetApp® slides which depicted storage as the elephant and compute as the birds. It is the perfect analogy, because storage is heavy and compute is light.

“Close up of a white Great Egret perching on top of an African Elephant at Amboseli National Park, Kenya”

Before the animal representation came about, I used to use the term “Data Locality, Data Mobility”, because of past work on storage technology in the Oil & Gas subsurface data management pipeline.

Take stock of your data movement

I had recent conversations with an end user who has been paying a lot of money to keep their “backup” and “archive” in AWS Glacier. The S3 Glacier storage is cheap enough to hold several petabytes of data for years, and the IT folks said that the data in AWS Glacier is for “backup” and “archive”. I put both words in quotes because they were labelled “backup” and “archive” out of their enterprise practice. However, the face of their business is changing. They are in manufacturing and oil and gas downstream, and the definitions of “backup” and “archive” data have changed.

For one, there is strong demand to reuse past data for various reasons, and these datasets have to be recalled from their cloud storage. Secondly, their data movement activities still mimic what they did in their enterprise storage days. It was a classic lift-and-shift when they moved to the cloud, without taking stock of their data movements and the operations they run on these datasets. Still ongoing, their monthly AWS bill costs a bomb.

Continue reading

Control your Files. Control your Sovereignty.

Data residency, data sovereignty, data localization – the trio of data compliance and governance – have been on my mind a lot lately. I am seeing a disturbing trend. The “Splinternet” has taken on a hurried and hastened pace. We are now seeing many countries drawing up digital boundaries in the name of data privacy and data protection, with sovereign laws and regulations. Besides these digital demarcations along the lines of data definitions, digital “colonization” is a strong undercurrent, as developing countries accept larger and more powerful foreign powers into their playpen.

Public cloud services transcend national borders. The breakneck speed of public cloud adoption is causing anxieties and concerns among conservative governments everywhere. On the flip side of the coin, commerce has certainly flourished and bloomed as worldwide collaborations bring new opportunities and new markets – all for capitalism and growth.

[ Note: While we are on this debacle, the voices of decentralization are getting louder as well, but that is a topic for another day ]

Where are your data files now?

Continue reading

Right time for Andrew. The Filesystem that is.

I couldn’t hold back my excitement when I discovered Auristor® early last week. I stumbled upon this ComputerWeekly article, “Want to side step Public Cloud? Auristor® offers global file storage”. Given the many news stories not exactly praising the public cloud storage vendors nowadays, the article’s title caught my attention. Immediately, the Andrew File System (AFS) was there. I was perplexed at first because I had never seen or heard of a commercial version of AFS before. This news gave me goosebumps.

For the curious, I am sure many will ask: who is this Andrew anyway? What is my relationship with this Andrew?

One time with Andrew

A bit of my history. I recall quite vividly helping Intel in Penang, Malaysia to implement their globally distributed file caching mechanism with the NetApp® filer’s NFS. It was probably 2001, and I believe Intel wanted to share their engineering computing (EC) files between their US facilities and the Intel Penang Design Center (PDC). As I worked alongside the Intel folks, I found out that this distributed file caching technology was called the Andrew File System (AFS).

Although I can’t really recall how the project went, I remember it being a bed of bugs at that time. But being the storage geek that I am, I obviously took some time to get to know Andrew the File System. 20 years have gone by, and I never really thought of AFS coming out as a commercial solution, nor knew of one, until Auristor®.

Auristor Logo

Continue reading