AI and the Data Factory

When I first heard the term “AI Factory”, the world was blaring Jensen Huang’s keynote at NVIDIA GTC24. I thought it was a cool phrase, since he spoke about raw material, like water, going into a factory to produce electricity. The analogy was spot on for the AI we are building.

As I engage with many DDN partners and end users in the region, week in, week out, the term “AI Factory” keeps popping up in conversations. Yet, many still do not know how to go about building this “AI Factory”. They only know they need to buy GPUs, lots of them. These companies’ AI ambitions are unabated. IDC predicts that worldwide spending on AI will double by 2028, and yet the ROI (return on investment) remains elusive.

At the ground level, based on many conversations so far, the common theme is that the steps to begin building the AI Factory are ambiguous and fuzzy to most. I would like to share my views from a data storage point of view. Hence, my take on the Data Factory for AI.

Are you AI-ready?

We have to have a plan, but before we take the first step, we must look at where we are standing at the present moment. We know the proverbial first step to training AI: we need lots of data. Deep Learning (DL), the engine behind Large Language Models (LLMs) and Generative AI (GenAI), needs tons of data.

If a company knows where it stands, it will know which phase comes next. So, in the AI Maturity Model (I simplified the diagram below), where is your company now? Are you AI-ready?

Simplified AI Maturity Model

Get the Data Strategy Right

In his interview with CRN, MinIO’s CEO AB Periasamy said, “For generative AI, they realized that buying more GPUs without a coherent data strategy meant GPUs are going to idle out”. I was struck by the wisdom of having a coherent data strategy because it is absolutely true. This is my starting point: having the Right Data Strategy.

In the AI world, from a data storage guy’s point of view, data is the fuel. Data is the raw material that Jensen alluded to, in case it was not obvious. We have heard this said many times before, even before the AI phenomenon took over. AI is data-driven. Data is vital for the ROI of AI projects. And thus, we must look from the point of view of the data to make the AI Factory successful.

Continue reading

What next after Cyber Resiliency?

There was a time some years ago when some storage vendors, especially the object storage ones, started calling themselves the “last line of defence”. And even further back, when purpose-built backup appliances (PBBAs) first appeared, a very smart friend of mine commented that they shouldn’t be called “backup appliances”; they should be called “restore appliances”. That was because data restoration, or to be more relevant in today’s context, data recovery, is the key to a crucial line of defence against cybersecurity threats to data, especially ransomware. We have a saying in the industry: “Hundreds of good backups are not as good as one good restore.” Of course, data restoration has become much more sophisticated in today’s data recovery processes.

In recent years, we have also seen the amalgamation of both data protection species – the backup/restore side and the cybersecurity side – giving rise to the term and the proliferation of Cyber Resilience.

Dialing Cyber Resilience (Picture from tehtris.com)

I have no qualms about, or lack of confidence in, cyber resilience technologies. I am pretty sure they can do the job extremely well, so much so that some vendors offer multi-million dollar guarantees should their solutions ever fail. Druva announced its Data Resiliency Guarantee of US$10 million and Rubrik has its Ransomware Recovery Warranty.

Of course, these warranties and guarantees come with terms, conditions and caveats, and not everyone is besotted by these big payouts. My friend Andrew Martin wrote a tongue-in-cheek piece last year about Rubrik’s warranty in his Data Storage Asia blog, discussing whether it was Rubrik’s genuineness or spuriousness that might win or lose customers’ affections. You should read his blog and decide.

Continue reading

The All-Important Storage Appliance Mindset for HPC and AI projects

I am a strong believer in using the right tool to do the job right. I said this 2 years ago in my blog “Stating the case for a Storage Appliance approach“, written when I was working for an open source storage company. And I am an advocate of the crafter-versus-assembler mindset, especially in the enterprise and high-performance storage technology segments.

I have since joined DDN, and that same mindset has not changed one bit. I have been saying all along that the storage appliance model should always be the mindset, for the business’s peace of mind.

My view of the storage appliance model began almost 25 years ago. I came into the NAS systems world via Sun Microsystems®. Sun was famous for running NFS services on general-purpose Sun Solaris servers – NFS services on Unix systems. Back then, I remember arguing with one of the Sun distributors about the merits of running NFS over 100Mbit/sec Ethernet on Sun servers. I was drinking Sun’s Kool-Aid big time.

When I joined Network Appliance® (now NetApp®) in 2000, my worldview of putting software on general purpose servers changed. Network Appliance® had one product family, the FAS700 (720, 740, 760) family. All NetApp® did in the beginning was serve NFS. They were NAS filers and nothing else.

I was completely sold on the appliance way with NetApp®. Firstly, it was the very first time I saw that network storage services could be provisioned with an appliance concept. This was different from Sun. I was used to managing NFS exports on a Sun SPARCstation 20 to Unix clients in the network.

Secondly, my mindset began to take shape: “you have to have the right tool to do the job correctly and extremely well“. The toaster toasts bread very well and nothing else. The fridge (an analogy used by Dave Hitz, I think) does what it does very well too. That is what an appliance does. You definitely cannot grill a steak with a bread toaster, just like you cannot run excellent, ultra-high-performance storage services for demanding AI and HPC applications on a general-purpose server platform. You have to have a storage appliance solution for High-Speed Storage.

That little Network Appliance® toaster award, given out to exemplary employees, still stands vividly in my mind. The NetApp® tagline back then was “Fast, Simple, Reliable”. That solidified my mindset about high-speed storage for AI and HPC projects in present times.

DDN AI400X2 Turbo Appliance

Costs, Benefits and Risks

I like to think about what the end users are thinking. There are investment costs involved, and along with them, risks to the investment as well as its benefits. Let’s simplify and lump them into a Cost-Benefit-Risk analysis triangle. These variables come into play in the decision making for AI and HPC projects.

Continue reading

Deploying a MinIO SNMD Object Storage Server in TrueNAS SCALE

[ Preamble ] This deployment of MinIO SNMD (single node multi drive) object storage server on TrueNAS® SCALE 24.04 (codename “Dragonfish”) is experimental. I am just deploying this in my home lab for the fun of it. Do not deploy in any production environment.

I have been contemplating this for quite a while. Which MinIO deployment mode on TrueNAS® SCALE should I work on? For one, there are 3 modes – Standalone, SNMD (Single Node Multi Drive) and MNMD (Multi Node Multi Drive). Of course, the ideal lab experiment is an MNMD deployment, the MinIO cluster, and I am still experimenting with this on my meagre lab resources.

In the end, I decided to implement SNMD since this is, most likely, deployed on top of a TrueNAS® SCALE storage appliance instead of on x86 bare metal or in a Kubernetes cluster on Linux systems. Incidentally, the concept of MNMD on top of TrueNAS® SCALE is “Kubernetes cluster”-like, albeit on a different container platform. At the same time, if this is deployed on TrueNAS® SCALE Enterprise, a dual-controller TrueNAS® storage appliance, the “MinIO node’s” availability is taken care of by the active-passive HA architecture of the appliance. Otherwise, it can be a full MinIO cluster spread and distributed across several TrueNAS® storage appliances (minimum 4 nodes in a 2+2 erasure set) in an MNMD deployment scheme.
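Once the SNMD app is up, a quick sanity check from a client goes a long way. Here is a minimal sketch using the MinIO Python SDK; the endpoint, port and credentials below are assumptions for my home lab, so substitute whatever your TrueNAS® SCALE deployment exposes.

```python
from minio import Minio
from minio.error import S3Error

# Assumed endpoint and credentials for the TrueNAS SCALE MinIO app in my lab.
# Substitute the host, port, access key and secret key from your own deployment.
client = Minio(
    "truenas.local:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,  # my lab deployment is plain HTTP
)

try:
    # Create a test bucket if it does not exist, then list buckets to confirm
    # the SNMD server is answering S3 API calls.
    if not client.bucket_exists("snmd-test"):
        client.make_bucket("snmd-test")
    for bucket in client.list_buckets():
        print(bucket.name, bucket.creation_date)
except S3Error as err:
    print("S3 request failed:", err)
```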

Ideally, the MNMD deployment should look like this:

MinIO distributed multi-node cluster architecture (credit: MinIO)

Continue reading

Making Immutability the key factor in a Resilient Data Protection strategy

We often hear the term “Cyber Resilience” thrown around these days. Every backup vendor has a cybersecurity play nowadays. Many have morphed into cyber resilience warrior vendors, and there is a great amount of validation of Cyber Resilience in the data protection world. Don’t believe me?

Check out this Tech Field Day podcast video from a month ago, where my friends Tom Hollingsworth and Max Mortillaro discussed the topic meticulously with Krista Macomber, who has just become the Research Director for Cybersecurity at The Futurum Group (Congrats, Krista!).

Cyber Resilience, as well articulated in the video, is not old wine in a new bottle. The data protection landscape has changed significantly since the emergence of cyber threats and ransomware that it warrants the coining of the Cyber Resilience terminology.

But I want to talk about one very important cog in the data protection strategy of which cyber resilience is part: Immutability. It is super important to always consider immutable backups as part of that strategy.

It is not 3-2-1 anymore, Toto.

When it comes to backup, I always start with the 3-2-1 backup rule: 3 copies of the data; 2 different media; 1 copy offsite. This rule has been ingrained in me since the day I entered the industry over 3 decades ago. It is still the most important opening line for a data protection specialist or a solution architect. 3-2-1 is table stakes.

Yet, over the years, the cybersecurity threat landscape has moved closer and closer to the data protection, backup and recovery realm. This has now merged into a super-segment pangea called cyber resilience. With it, the conversation over these last few years has evolved from the 3-2-1 backup rule into something like the 3-2-1-1-0 backup rule, a modern take on 3-2-1. Let’s take a look at the 3-2-1-1-0 rule (simplified by me).

The 3-2-1-1-0 Backup rule (Credit: https://www.dataprise.com/services/disaster-recovery/baas/)
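To make the rule concrete, here is a minimal sketch of how one might check a set of backup copies against my simplified reading of 3-2-1-1-0 (3 copies, 2 media types, 1 copy offsite, 1 immutable or air-gapped copy, 0 verification errors). The data structure and field names are purely illustrative, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str          # e.g. "disk", "tape", "object"
    offsite: bool       # stored away from the primary site
    immutable: bool     # WORM / object-lock / air-gapped
    verify_errors: int  # errors from the last restore/verification test

def meets_3_2_1_1_0(copies: list[BackupCopy]) -> bool:
    """Check a set of backup copies against a simplified 3-2-1-1-0 reading."""
    return (
        len(copies) >= 3                                   # 3 copies of the data
        and len({c.media for c in copies}) >= 2            # on 2 different media
        and any(c.offsite for c in copies)                 # 1 copy offsite
        and any(c.immutable for c in copies)               # 1 immutable/air-gapped copy
        and all(c.verify_errors == 0 for c in copies)      # 0 errors on verification
    )

# Hypothetical example: disk onsite, object store offsite with object lock, tape offsite.
copies = [
    BackupCopy("disk", offsite=False, immutable=False, verify_errors=0),
    BackupCopy("object", offsite=True, immutable=True, verify_errors=0),
    BackupCopy("tape", offsite=True, immutable=True, verify_errors=0),
]
print(meets_3_2_1_1_0(copies))  # True
```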

Continue reading

Understanding security practices in File Synchronization

Ho hum. Another day, and another data leak. What else is new?

The latest hullabaloo on my radar was from one of Malaysia’s revered universities, UiTM, which reported a data leak of 11,891 student applicants’ private details, including the MyKad (national identity card) number of each individual. Reading the news article, one can deduce that the unsecured link mentioned was probably from a cloud storage service, i.e. file synchronization software such as OneDrive, Google Drive, Dropbox, etc. These are files that can be easily shared via an HTTP/S URL link. Ah, convenience over data security best practices.

Cloud File Sync software

It irks me when data security is poorly practised. And it is likely that there was ignorance of data security practices in the first place.

It also irks me when end users everywhere I go tell me their file synchronization software is their backup. That is a very poor excuse for a data protection strategy, if it is one at all, especially in enterprise and cloud environments. Convenience, a set-and-forget mentality. Out of sight, out of mind. Right?

Convenience is not data security. File Sync is NOT Backup

Many users are used to the convenience of file synchronization. The proliferation of cloud storage services with free gigabytes here and there has created an IT segment built on BYOD, which transformed into EFSS, and now CCP. The buzzword salad goes from Bring-Your-Own-Device to Enterprise-File-Sync-&-Share, and in these later years, Content-Collaboration-Platform.

All this is fine and good. The data industry is growing up, and many are leveraging the power of file synchronization technologies, be it on-premises or from cloud storage services. Organizations, large and small, are able to use these file synchronization platforms to enhance their businesses and digitally transform their operational efficiencies and practices. But what is sorely missing in embracing the convenience and simplicity is the much-ignored cybersecurity housekeeping that should be keeping our files and data safe.

Continue reading

Project COSI

S3 (Simple Storage Service) has become the de facto standard for accessing object storage. Many vendors claim 100% compatibility with S3, but from what I know, several integration and validation exercises between file storage services and S3 have revealed otherwise. There are certain nuances that have derailed some of the more advanced integrations. I shall not reveal the ones I know of, but let us use this thought as the basis of our discussion of Project COSI in this blog.

Project COSI high level architecture

What is Project COSI?

COSI stands for Container Object Storage Interface. It is still an alpha-stage project in Kubernetes version 1.25 as of September 2022, whilst the latest version of Kubernetes today is 1.26. To understand the objectives of COSI, one must understand the journey and the challenges of persistent storage for containers and Kubernetes.

For me at least, there have been arduous arguments about provisioning a storage repository that keeps data persistent (and permanent) after the containers in a Kubernetes pod have stopped, or when it is replicated to another cluster. For now, many storage vendors in the industry have settled on the CSI (Container Storage Interface) framework when it comes to data persistence for file-based and block-based storage. You can find a long list of CSI drivers here.

However, you would think that since object storage is the most native storage for containers and Kubernetes pods, there would already be a consistent way of accessing object storage services. From the objectives set out by Project COSI, it turns out that there is no standard way to provision and access object storage comparable to the CSI framework for file-based and block-based storage. So the COSI objectives were set out as:

  • Kubernetes Native – Use the Kubernetes API to provision, configure and manage buckets
  • Self Service – A clear delineation between administration and operations (DevOps) to enable self-service capability for DevOps personnel
  • Portability – Vendor neutrality enabled through portability across Kubernetes Clusters and across Object Storage vendors

Further details describing Project COSI can be found here at the Kubernetes site titled “Introducing COSI: Object Storage Management using Kubernetes API“.
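As a rough illustration of the “Kubernetes Native” objective, the sketch below uses the Kubernetes Python client to submit a BucketClaim custom resource. The group/version, the bucket class name and the field layout here are my assumptions based on the alpha API at the time, not taken verbatim from the Kubernetes post, so treat it as a sketch rather than a definitive COSI example.

```python
from kubernetes import client, config

# Load kubeconfig from the default location (assumes a working cluster context
# and that the COSI controller and CRDs are already installed).
config.load_kube_config()
api = client.CustomObjectsApi()

# A BucketClaim manifest as I understand the COSI alpha API; the apiVersion,
# bucketClassName and protocol values are illustrative assumptions.
bucket_claim = {
    "apiVersion": "objectstorage.k8s.io/v1alpha1",
    "kind": "BucketClaim",
    "metadata": {"name": "demo-bucket-claim", "namespace": "default"},
    "spec": {
        "bucketClassName": "example-bucket-class",  # hypothetical BucketClass
        "protocols": ["s3"],
    },
}

# Create the claim; the COSI controller should then provision a bucket
# through the registered object storage driver.
api.create_namespaced_custom_object(
    group="objectstorage.k8s.io",
    version="v1alpha1",
    namespace="default",
    plural="bucketclaims",
    body=bucket_claim,
)
```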

Standardization equals technology adoption

Standardization means consistency, control and confidence. The higher the standardization across the storage and containerized-apps industry, the higher the adoption of the technology. And given what I have heard from the industry over these few years, Kubernetes, to me, even to this day, is a platform and framework riddled with many moving parts. Many of the components look the same, feel the same, and sound the same, but might not work out the same when deployed.

Therefore, the COSI standardization work is important and critical to growing this burgeoning segment, especially as we rocket towards the disaggregation of computing service units, resources that can be orchestrated to scale up or down as code executes. Infrastructure-as-Code (IaC) is becoming more of a reality with each passing day, and object storage is at the heart of this transformation for Kubernetes and containers.

Continue reading

Open Source on my mind

Last week, topics around Open Source software kept cropping up. I want to voice my opinions here (with a bit of ranting) and hope not to rouse too many hostile comments from different parties and viewpoints. This blog is meant to create conversations, even controversial ones, but we must first agree that there will be disagreements. We must accept disagreements as part of this conversation.

In my 30-year career, Open Source has been a big part of my development and progress. The ideas of freely using (certain) software without licensing implications, and of that software being openly available, were not always as welcomed as they are now. I think the Open Source revolution has created an innovation movement that is still going strong; it has not only permeated the IT industry completely, Open Source is now also in almost every part of the technology-based industries. The Open Source influence is massive.

Open Source word cloud

In the beginning

In the beginning, my beginning in 1992, the availability of software and its source code was a closed affair. Coming from a VAX/VMS background (I was a system admin of my mathematics department’s minicomputers), Unix liberated my thinking. The final 6 months at university were spent on systems programming in C, and it completely changed how I wanted my career to take shape. The mantra of “Free as in Freedom” in the General Public License (GPL), which I got to know of much later, boded well with my own tenets in life.

If closed source development models led to proprietary software and a centralized way of distributing licensed software, I would count the Open Source development model as one of the earliest decentralized technology frameworks. Down with the capitalistic corporations (aka Evil Empires)!

It was certainly a wonderful and generous way to make the world what it is today. It is a better world now.

Continue reading

Beyond the WORM with MinIO object storage

I find the terminology of WORM (Write Once Read Many) coming back into IT speak in recent years. In the era of rip and burn, WORM was a natural thing, when many of us “youngsters” used to copy files to a blank CD or DVD. I got to know how WORM worked when I learned that the laser in the CD burning process alters the chemical compound in a segment of the plastic disc, rendering the “burned” segment unwritable once it was written, although it could be read many times.

At the enterprise level, I got to know about WORM while working with tape drives and tape libraries in the mid-90s. The objective of WORM is to save and archive data and files in a non-rewritable format for compliance reasons. And it was the data compliance and data protection parts that got me interested in data management. WORM is a big deal in many heavily regulated industries such as finance and banking, insurance, oil and gas, transportation and more.

Obviously things have changed. WORM, while very much alive in the ageless tape industry, has another up-and-coming medium in Object Storage. The new generation of data infrastructure and data management specialists are starting to take notice.

WORM Storage – Image from Hubstor (https://www.hubstor.net/blog/write-read-many-worm-compliant-storage/)

I am taking this opportunity to take MinIO object storage for a spin, creating WORM buckets that can easily be architected as data compliance repositories for many applications across regulated industries. Here are some relevant steps.
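As a companion to those steps, here is a minimal sketch using the MinIO Python SDK to create an object-locked (WORM) bucket with a default COMPLIANCE retention of 30 days. The endpoint, credentials and retention period are placeholders for a lab setup, not the exact values from the walkthrough.

```python
from minio import Minio
from minio.commonconfig import COMPLIANCE
from minio.objectlockconfig import DAYS, ObjectLockConfig

# Placeholder endpoint and credentials for a lab MinIO server; substitute your own.
client = Minio(
    "minio.lab.local:9000",
    access_key="minioadmin",
    secret_key="minioadmin",
    secure=False,
)

# Object locking must be enabled at bucket creation time; it cannot be
# switched on later for an existing bucket.
client.make_bucket("compliance-archive", object_lock=True)

# Apply a default retention rule: objects are locked in COMPLIANCE mode
# for 30 days and cannot be deleted or overwritten during that period.
client.set_object_lock_config(
    "compliance-archive",
    ObjectLockConfig(COMPLIANCE, 30, DAYS),
)
```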

Continue reading