Preliminary Data Taxonomy at ingestion: an opportunity for Computational Storage

Data governance has been on my mind a lot lately. With all the incessant talk and hype about Artificial Intelligence, the true value of AI comes from good data. Therefore, it is vital for any organization embarking on its AI journey to have good quality data. And the lifecycle of data in an organization starts at the point of ingestion, the data source where data is either created or acquired, before being presented up into the processing workflows and data pipelines for AI training, and onwards to AI applications.

In biology, taxonomy is the scientific study and practice of naming, defining and classifying biological organisms based on shared characteristics.

And so begins my argument for meshing these 3 topics together – data ingestion, data taxonomy and Computational Storage. Here goes my storage punditry.

Data Taxonomy post-ingestion

I see that data, any data, has to arrive at a repository first before it is given meaning, context and specifications. These are different from file permissions, ownership, and the ctime and atime timestamps; those merely make the content of the ingested data stream fit into the mould of the repository the data is written to. Metadata about the content of the data gives the data meaning, context and, most importantly, value as it is used within the data lifecycle. However, the metadata tagging and the preparation of the data in the ETL (extract, transform, load) or ELT (extract, load, transform) process are only applied post-ingestion. This data preparation phase, in which data is enriched with content metadata, tagging, taxonomy and classification, is expensive, in terms of resources, time and currency.
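To make the idea concrete, here is a minimal sketch (in Python, with field names and taxonomy labels of my own invention, not from any particular product) of what tagging a data stream with content metadata at the point of ingestion could look like, before the data ever settles into a repository:

```python
import hashlib
import json
from datetime import datetime, timezone

def tag_at_ingestion(payload: bytes, source: str, taxonomy: str) -> dict:
    """Attach content metadata at the point of ingestion, not post-ingestion."""
    return {
        # content fingerprint, independent of the repository it lands in
        "sha256": hashlib.sha256(payload).hexdigest(),
        "size_bytes": len(payload),
        "source": source,        # where the data was created or acquired
        "taxonomy": taxonomy,    # classification label, e.g. "finance/invoices"
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

record = tag_at_ingestion(b'{"amount": 42}', "pos-terminal-7", "finance/invoices")
print(json.dumps(record, indent=2))
```

The point of the sketch: every field here is content metadata that gives the bytes meaning and value, and nothing in it requires the downstream ETL/ELT machinery to exist yet.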

Elements of a modern event-driven architecture including data ingestion (Credit: Qlik)

Even in the burgeoning times of open table formats (Apache Iceberg, Apache Hudi, Delta Lake, et al.), open big data file formats (Avro, Parquet) and open data formats (CSV, XML, JSON, et al.), the format specifications with added context and meaning are applied and augmented post-ingestion.
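A simple illustration of that point (Python, with made-up field names): a raw ingested CSV stream carries no meaning of its own, and only gains context when a schema is applied to it after the fact, in the data preparation phase:

```python
import csv
import io
import json

# A raw ingested CSV stream: just bytes-turned-text, with no context.
raw = "1001,23.5,2024-03-01\n1002,17.0,2024-03-02\n"

# The schema (the meaning) is applied post-ingestion, during data prep.
# These field names are hypothetical, chosen for illustration.
schema = ["sensor_id", "temperature_c", "reading_date"]

records = [dict(zip(schema, row)) for row in csv.reader(io.StringIO(raw))]
print(json.dumps(records[0]))
```

Until that `schema` line runs, "1001" is just a string; the taxonomy argument is about moving that enrichment step forward to the moment of ingestion.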

Continue reading

A Data Management culture to combat Ransomware

On the road, the seat belt saves lives. So does the motorcycle helmet. But these 2 technologies alone would probably not be well received and well applied daily unless there is a strong ecosystem and culture of road safety. For decades, there have been constant and unrelenting efforts to enforce the habit of putting on the seat belt or the helmet. Statistics have shown they reduce road fatalities, but like I said, it is the safety culture that made all this happen.

On the digital front, the ransomware threats are unabated. In fact, despite organizations (and individuals), both large and small, being more aware of cyber-hygiene practices than ever, the magnitude of ransomware attacks has multiplied. Threat actors still see weaknesses, gaps and vulnerabilities in the digital realm, and thus these remain lucrative ventures that reward their endeavours.

Time to look at Data Management

The Cost-Benefits-Risks Conundrum of Data Management

I have said this before. At a recent speaking engagement, I brought it up again. I said that ransomware is not a cybersecurity problem. Ransomware is a data management problem. I got blank stares from the crowd.

I get it. It is hard to convince people and companies to embrace a better data management culture. I thought about the Cost-Benefits-Risks triangle while I was analyzing the lack of a data management culture in many organizations combating ransomware.

I get it that cybersecurity is big business. Even many of the storage guys I know want to jump onto the cybersecurity bandwagon. Many of the data protection vendors are already mashing their solutions with a cybersecurity twist. That is where the opportunities are, and where the cool kids hang out. I get it.

Cybersecurity technologies are more tangible than data management. I get it when the C-suites like to show off shiny new cybersecurity “toys” because they are allowed to brag. Oh, my company has just implemented security brand XXX, and it’s so cool! They can’t be telling their golf buddies that they have a new data management culture, can they? What’s that?

Continue reading

Reverting the Cloud First mindset

When cloud computing was all the rage, every business wanted to be on-board. Those who resisted felt the heat as the FOMO (fear of missing out) feeling set in, especially those who were doing this thing called “Digital Transformation“. The public cloud service providers took advantage of the cloud computing frenzy, calling for a “Cloud First” strategy. For a number of years, the marketing worked. The cloud first mentality became the tip of the tongue of many, encouraging droves to cloud adoption.

All this was fine and dandy, but recently we are beginning to hear and read about a few high profile cases of cloud repatriation. DHH‘s journal of Basecamp’s exit from AWS in late 2022 reverberated strongly, sounding what should be a wake-up call for those caught in the Cloud Computing Hotel California’s gilded cage. An even more bizarre claim, of $400 million in cost savings over 3 years, was made by Ahrefs, a Singapore SEO software maker which chose to use a co-location facility instead of a public cloud service.

Cloud First is not Cool (not sure where the source is from, but I got this off Twitter some months ago)

While these big news jail breaks go against the grain, most are still rushing to jump into cloud services everywhere. In droves, even. But, on and off, I am beginning to hear some gripes, grunts and groans from end users in the cloud. These stories have emboldened some to think that there is another choice besides shifting all IT and data services to the cloud.

Continue reading

Object Storage becoming storage lingua franca of Edge-Core-Cloud

Data Fabric was a big buzzword going back several years. I wrote a piece talking about Data Fabric, mostly NetApp®’s, almost 7 years ago, which I titled “The Transcendence of Data Fabric“. Regardless of storage brands and technology platforms, each with its own version and interpretation, one thing holds true: there must be one layer of Data Singularity. But this is easier said than done.

Fast forward to the present. The latest buzzword is Edge-to-Core-Cloud or Cloud-to-Core-Edge. The proliferation of Cloud Computing services has spawned multiclouds, superclouds and, of course, Edge Computing. Data is reaching so many premises everywhere, and like water, data has found its way.

Edge-to-Core-to-Cloud (Grateful thanks to https://www.techtalkthai.com/dell-technologies-opens-iot-solutions-division-and-introduces-distributed-core-architecture/)

The question on my mind is: can we have a single storage platform to serve the Edge-to-Core-to-Cloud paradigm? Is there a storage technology which can be the seamless singularity of data? 7+ years on since my Data Fabric blog, the answer is obvious. Object Storage.

The ubiquitous object storage and the S3 access protocol

For a storage technology that was initially labeled “cheap and deep”, object storage has become immensely popular with developers and cloud storage providers, and is fast becoming the storage repository for data connectors. I wrote a piece called “All the Sources and Sinks going to Object Storage” over a month back, which aptly articulates how far this technology has come.

But unknown to many (Google NASD and little is found), object storage started its presence in SNIA (it was developed at Carnegie Mellon University prior to that) in the early 90s, then known as NASD (network attached secure disks). As it made its way into the ANSI T10 INCITS standards development, it became known as Object-based Storage Device or OSD.

The introduction of object storage services 16+ years ago by Amazon Web Services (AWS) via their Simple Storage Service (S3) further strengthened the march of object storage and solidified its status as a top tier storage platform. It was AWS’ genius to put a REST API over HTTP/HTTPS, with its game changing approach of using CRUD (create, retrieve, update, delete) operations to work with object storage. Hence the S3 protocol, which has become the de facto access protocol for object storage.
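A rough sketch of that CRUD-to-HTTP mapping as I understand the S3 protocol (the helper function and the bucket/key names here are my own illustration, not part of any SDK):

```python
# How S3-style object storage maps CRUD operations onto plain HTTP verbs.
S3_CRUD = {
    "create": "PUT",      # PUT /bucket/key uploads a new object
    "retrieve": "GET",    # GET /bucket/key downloads it
    "update": "PUT",      # objects are immutable; an update is a full re-PUT
    "delete": "DELETE",   # DELETE /bucket/key removes it
}

def s3_request_line(op: str, bucket: str, key: str) -> str:
    """Build the HTTP request line an S3 client would issue for a CRUD op."""
    return f"{S3_CRUD[op]} /{bucket}/{key} HTTP/1.1"

print(s3_request_line("retrieve", "my-bucket", "reports/q3.parquet"))
```

That simplicity, four verbs over ubiquitous HTTP/HTTPS, is a big part of why the S3 protocol travelled so far beyond AWS itself.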

Yes, I wrote those 2 blogs 11 and 9 years ago respectively because I saw that object storage technology was a natural fit for the burgeoning new world of cloud computing. It has since come true many times over.

Continue reading

IT Data practices and policies still wanting

There is an apt and honest editorial cartoon about Change.

From https://commons.wikimedia.org/wiki/File:Who-Wants-Change-Crowd-Change-Management-Yellow.png

I was a guest of the Channel Asia Executive Roundtable last week. I joined several luminaries in South East Asia to discuss the topic of “How Partners can bring value to the businesses to manage their remote workforce“.

Covid-19 decimated what we knew as work in general. The world had to pivot and now, 2+ years later, a hybrid workforce has emerged. The mixture of remote work, work-from-home (WFH), the physical office and everywhere else has brought about a new mindset and new attitudes in employers and their staff alike. Without a doubt, the remote way of working is here to stay.

People won but did the process lose?

The knee jerk reaction when the Covid lockdowns hit was to switch work to remote access to applications, on premises or in the clouds. Many companies had already moved to the software-as-a-service (SaaS) way of working, but not all had made the jump, just as not all of a company’s applications were SaaS based. Of course, the first thing these stranded companies did was to look for technologies to solve this unforeseen disorder.

People Process Technology.
Picture from https://iconstruct.com/blog/people-process-technology/

Continue reading

A conceptual distributed enterprise HCI with open source software

Cloud computing has changed everything, at least at the infrastructure level. Kubernetes is changing everything as well, at the application level. Enterprises are attracted by the tenets of cloud computing and thus, cloud adoption has escalated. But it does not have to be a zero-sum game. Hybrid computing can give enterprises a balanced choice, and they can take advantage of the best of both worlds.

Open Source has changed everything too, because organizations now have a choice to balance their costs and expenditures with top enterprise-grade software. The challenge is: what can organizations do to put these pieces together using open source software? Integration of open source infrastructure software and applications can be complex and costly.

The next version of HCI

Hyperconverged Infrastructure (HCI) also changed the game. Integration of compute, network and storage became easier, more seamless and less costly when HCI entered the market. Wrapped with a single control plane, the HCI management component can orchestrate VM (virtual machine) resources without much friction. That was HCI 1.0.

But HCI 1.0 was challenged, because several key components of its architecture were based on DAS (direct attached storage). Scaling storage from a capacity point of view was limited by the storage components attached to the HCI architecture. Some storage vendors decided to be creative and created dHCI (disaggregated HCI). If you break down the components one by one, in my opinion, dHCI is just a SAN (storage area network) bolted onto HCI. Maybe this should be HCI 1.5.

A new version of an HCI architecture is swimming in as Angelfish

Kubernetes came into the HCI picture in recent years. Without the weight and dependencies of VMs and DAS at the HCI server layer, lightweight containers, orchestrated mostly by Kubernetes, made distribution of compute easier. From on-premises to cloud and in between, compute resources can easily be spun up or down anywhere.

Continue reading

How well do you know your data and the storage platform that processes the data

Last week was consumed by many conversations on this topic. I was quite jaded, really. Unfortunately many still take a very simplistic view of storage technology, or should I say, of the over-marketing of storage technology. So much so that end users make incredible assumptions about the benefits of a storage array, a software defined storage platform, or even cloud storage. And too often the caveats of turning on a feature or tuning a configuration to the max are discarded or neglected. Regard for good storage and data management best practices? What’s that?

I share some of my thoughts on handling conversations like these, trying to set the right expectations rather than overhype a feature or a function in data storage services.

Complex data networks and the storage services that serve them

I/O Characteristics

Applications and workloads (A&W) read and write from data storage services platforms. These could be local DAS (direct attached storage), network storage arrays in SAN and NAS, and now object storage, or cloud storage services. Regardless of structured or unstructured data, different A&Ws have different behavioural I/O patterns when accessing data from storage. Therefore storage has to be configured to best match these patterns, so that it can perform optimally for these A&Ws. Without going into deep details, here are a few to think about:

  • Random and Sequential patterns
  • Block sizes of these A&Ws, typically ranging from 4K to 1024K.
  • Causal effects of synchronous and asynchronous I/Os to and from the storage
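To illustrate the first two bullets with a Python sketch (the numbers are arbitrary, not a benchmark): sequential I/O walks the device block by block, while random I/O scatters block-aligned accesses across the whole capacity, and that difference is a big part of why the same storage performs so differently for different A&Ws:

```python
import random

BLOCK = 4096  # a typical 4K block size

def sequential_offsets(n: int, block: int = BLOCK) -> list[int]:
    # Sequential pattern: each I/O lands right after the previous one.
    return [i * block for i in range(n)]

def random_offsets(n: int, capacity: int,
                   block: int = BLOCK, seed: int = 1) -> list[int]:
    # Random pattern: each I/O lands anywhere on the device,
    # still aligned to the block size.
    rng = random.Random(seed)
    return [rng.randrange(capacity // block) * block for _ in range(n)]

seq = sequential_offsets(4)       # [0, 4096, 8192, 12288]
rnd = random_offsets(4, 1 << 30)  # 4 scattered offsets within a 1 GiB device
```

On spinning media the sequential list is nearly free while the random list pays a seek per I/O; on flash the gap narrows but block size and alignment still matter.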

Continue reading

The Starbucks model for Storage-as-a-Service

Starbucks™ is not a coffee shop. It purveys beyond coffee and tea, and food, and puts together the yuppie beverage experience. The intention is to get the customers to stay as long as they can, and keep purchasing the Starbucks’ smorgasbord of high margin provisions in volume. Wifi, ambience, status, coffee or tea with your name on it (plenty of jokes and memes there), energetic baristas and servers, fancy coffee roasts and beans et al. All part of the Starbucks™-as-a-Service pleasurable affair that intends to lock the customer in and have them keep coming back.

The Starbucks experience

Data is heavy and they know it

Unlike compute and network infrastructures, storage infrastructure holds data persistently and permanently. Data has to land on a piece of storage medium. Couple that with the fact that data is heavy, forever growing and has gravity, and you have a perfect recipe for lock-in. For all storage purveyors, whether they are on-premises data center enterprise storage or public cloud storage, or anything in between, there are many, many methods to keep the data chained to a storage technology or a storage service for a long time. Storage-as-a-service is like tying the cow to the stake and milking it continuously. This business model is very sticky. This stickiness is also a lock-in mechanism.

Continue reading

Open Source Storage Technology Crafters

The conversation often starts with a challenge. “What’s so great about open source storage technology?”

For the casual end users of storage systems, regardless of SAN (definitely not Fibre Channel) or NAS on-premises, or getting “files” from personal cloud storage like Dropbox, OneDrive et al., there is a strong presumption that open source storage technology is cheap and flaky. This is not helped by the diet of consumer brands of NAS in the market, where the price is cheap, but the storage offering’s capabilities, reliability and performance are found wanting. Thus this notion floats its way up to business and enterprise users, and often ends up as a negative perception of open source storage technology.

Highway Signpost with Open Source wording

Storage Assemblers

Anybody can “build” a storage system with open source storage software. Put the software together with any commodity x86 server, and it can function with the basic storage services. Most open source storage software can do the job pretty well. However, once the completed storage technology is put together, can it do the job well enough to serve a business critical end user? I have plenty of sob stories from end users I have spoken to over my many years in the industry, related to so-called “enterprise” storage vendors. I wrote a few blogs in the past related to these sad situations:

Such storage offerings come rigged with cybersecurity risks and holes too. In a recent Unit 42 report, 250,000 NAS devices were found vulnerable and exposed to the public Internet. The brands in question are mentioned in the report.

I would categorize these as storage assemblers.

Continue reading

Don’t go to the Clouds. Come back!

Almost in tandem last week, Nutanix™ and HPE appeared to have made denigrating comments about the Cloud First mandates of many organizations today. Nutanix™ took to its annual .NEXT conference to send the message that cloud is wasteful. HPE campaigned against the UK Public Sector’s “Cloud First” policy.

Cloud First or Cloud Not First

The anti-cloud-first messaging sounded a bit funny and hypocritical when both companies have a foot in the public clouds, advocating for many of their customers to be in the clouds. So what gives?

That A16Z report

For a number of years, many feared criticizing the public cloud services openly. For me, there are 3 C bombs in public clouds.

  • Costs
  • Complexity
  • Control (lack of it)

Yeah, we would hear of a few mini heart attacks here and there about clouds overcharging customers, and about security fallouts. But vendors, who then looked up to the big 3 public clouds as deities, rarely chastised them for these errors. Until recently.

“The Cost of Cloud, a Trillion Dollar Paradox”, released by revered VC firm Andreessen Horowitz in May 2021, opened up the vocals of several vendors, who are now emboldened to make stronger comments about the shortcomings of public cloud services. The report has made it evident that public cloud services are not the panacea for all IT woes.

And looking at the trends, this will only get louder.

Use ours first. We are better

It is pretty obvious that both Nutanix™ and HPE have bigger stakes outside the public cloud IaaS (infrastructure-as-a-service) offerings. It is also pretty obvious that neither is the biggest player in this cloud-first economy. Given their weight in their respective markets, they are leveraging their positions to swing mindsets to the turf where they can win.

“Use our technology and services. We are better, even though we are also in the public clouds.”

Not a zero sum game

But IT services and IT technologies are not a zero sum game. On-premises IT services and complementary public cloud services can co-exist. Each can leverage the other’s strengths and cover the other’s weaknesses, if you know how to blend and assimilate the best of both worlds. Hybrid cloud is the new black.

Gartner Hype Cycle

The IT pendulum swings. Technology hype goes fever pitch. Everyone thinks there is a cure for cancer. Reality sets in. They realize that they were wrong (not completely) or right (not completely). Life goes on. The Gartner® Hype Cycle explains this very well.

The cloud is OK

There are many merits to having IT services provisioned in the cloud. Agility, pay-per-use, OPEX, burst traffic, seemingly unlimited resources and so on. You can read more about it in Benefits of Cloud Computing: The pros and cons. Even AWS agrees with Three things every business needs from hybrid cloud, perhaps to the chagrin of these naysayers.

I have opined that there is no single solution for everything. There is no Best Storage Technology Ever (a snarky post). And so, I believe there is nothing wrong with Nutanix™ and HPE, and maybe others, being a little hypocritical about their cloud and non-cloud technology offerings. These companies are adjusting and adapting to the changing landscape of IT environments, but it is best not to confuse customers about what their tactics, strategy and vision are. Inconsistency in messaging diminishes trust.