When I first heard the term “AI Factory”, the world was abuzz with Jensen Huang‘s keynote at NVIDIA GTC24. I thought it was a clever phrase, since he spoke about the raw material, water, going into the factory to produce electricity. The analogy was spot on for the AI we are building.
As I engage with many DDN partners and end users in the region, week in, week out, the term “AI Factory” keeps popping up in conversations. Yet, many still do not know how to go about building this “AI Factory”. They only know they need to buy GPUs, lots of them. These companies’ AI ambitions are unabated. IDC predicts that worldwide spending on AI will double by 2028, and yet the ROI (return on investment) remains elusive.
At the ground level, based on many conversations so far, the common theme is that the steps to begin building the AI Factory are ambiguous and fuzzy to most. I would like to share my views from a data storage point of view. Hence, my take on the Data Factory for AI.
Are you AI-ready?
We have to have a plan, but before we take the first step, we must look at where we stand at the present moment. We know that to train AI, the proverbial first requirement is lots of data. Deep Learning (DL), Large Language Models (LLMs), and Generative AI (GenAI) all need tons of data.
If a company knows where it is, it will know which phase comes next. So, in the AI Maturity Model (I simplified the diagram below), where is your company now? Are you AI-ready?
Simplified AI Maturity Model
Get the Data Strategy Right
In his interview with CRN, MinIO CEO AB Periasamy said, “For generative AI, they realized that buying more GPUs without a coherent data strategy meant GPUs are going to idle out”. I was struck by his wisdom about having a coherent data strategy because it is absolutely true. This is my starting point: having the right data strategy.
In the AI world, speaking as a data storage guy, data is the fuel. Data is the raw material Jensen alluded to, obvious as it may be. We have heard this said many times before, even before the AI phenomenon took over. AI is data-driven. Data is vital to the ROI of AI projects. And thus, we must look from the point of view of the data to make the AI Factory successful.
Towards the middle of the 2000s, I started getting exposure to Data Governance. This began as I was studying and practising to be certified as an Oracle Certified Professional (OCP) circa 2002-2003. My understanding of the value of data and databases in the storage world, now better known as data infrastructure, grew and expanded quickly. I never got my OCP certification because I ran out of money investing in the 5 required classes, which included PL/SQL, DBA Admin I and II, and Performance Tuning. My son, Jeffrey, was born in 2002, and money was tight.
For most organizations I engaged with at that time, and over the course of the next 18 years or so, pre-Covid, the practice of data governance was simply to comply with some regulatory requirement.
I am a strong believer in using the right tool to do the job right. I said this 2 years ago in my blog “Stating the case for a Storage Appliance approach“, written when I was working for an open source storage company. And I am an advocate of the crafter-versus-assembler mindset, especially in the enterprise and high-performance storage technology segments.
I have since joined DDN, and that mindset has not changed a bit. I have been saying all along that the storage appliance model should always be the default for a business’s peace of mind.
My view of the storage appliance model began almost 25 years ago. I came into the NAS systems world via Sun Microsystems®. Sun was famous for running NFS servers on general-purpose Sun Solaris servers, serving NFS on Unix systems. Back then, I remember arguing with one of the Sun distributors about the tenets of running NFS over 100Mbit/sec Ethernet on Sun servers. I was drinking Sun’s Kool-Aid big time.
When I joined Network Appliance® (now NetApp®) in 2000, my worldview of putting software on general purpose servers changed. Network Appliance® had one product family, the FAS700 (720, 740, 760) family. All NetApp® did in the beginning was serve NFS. They were NAS filers and nothing else.
I was completely sold on the appliance way with NetApp®. Firstly, it was my very first time seeing that such network storage services could be provisioned with an appliance concept. This was different from Sun. I was used to managing NFS exports on a Sun SPARCstation 20 for Unix clients in the network.
Secondly, my mindset began to take shape around the idea that “you have to have the right tool to do the job correctly and extremely well“. Well, the toaster toasts bread very well and nothing else. And the fridge (an analogy used by Dave Hitz, I think) does what it does very well too. That is what the appliance does. You definitely cannot grill a steak with a bread toaster, just like you cannot run excellent, ultra-high performance storage services for demanding AI and HPC applications on a general server platform. You have to have a storage appliance solution for High-Speed Storage.
That little Network Appliance® toaster award given out to exemplary employees stood vividly in my mind. The NetApp® tagline back then was “Fast, Simple, Reliable”. That solidified my mindset for high-speed storage in AI and HPC projects in present times.
I like to think about what the end users are thinking. There are investment costs involved, and along with them, risks to the investment as well as its benefits. Let’s simplify and lump them into a Cost-Benefit-Risk analysis triangle. These variables come into play in the decision making of AI and HPC projects.
Data governance has been on my mind a lot lately. With all the incessant talk and hype about Artificial Intelligence, the true value of AI comes from good data. Therefore, it is vital for any organization embarking on its AI journey to have good quality data. And the lifecycle of data in an organization starts at the point of ingestion, the data source where data is either created or acquired, and then presented to the processing workflow and data pipelines for AI training and onwards to AI applications.
In biology, taxonomy is the scientific study and practice of naming, defining and classifying biological organisms based on shared characteristics.
And so begins my argument for meshing these 3 topics together: data ingestion, data taxonomy and Computational Storage. Here goes my storage punditry.
Data Taxonomy post-ingestion
I see that data, any data, has to arrive at a repository first before it is given meaning, context and specifications. These go beyond file permissions, ownerships, and ctime and atime timestamps; the content of the ingested data stream is made to fit the mould of the repository it is written to. Metadata about the content gives the data meaning, context and, most importantly, value as it is used within the data lifecycle. However, the metadata tagging and the preparation of the data in the ETL (extract, transform, load) or ELT (extract, load, transform) process are only applied post-ingestion. This data preparation phase, in which data is enriched with content metadata, tagging, taxonomy and classification, is expensive, in terms of resources, time and currency.
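To make that post-ingestion enrichment step concrete, here is a minimal Python sketch of tagging a freshly ingested record with content metadata and a taxonomy label. The field names and the classify_content() helper are my own illustrative assumptions, not any particular product’s API or pipeline.

```python
from datetime import datetime, timezone

def classify_content(payload: str) -> str:
    """Hypothetical taxonomy rule: assign a class based on simple keyword checks."""
    if "invoice" in payload.lower():
        return "finance/billing"
    if "patient" in payload.lower():
        return "healthcare/records"
    return "general/unclassified"

def enrich_record(payload: str, source: str) -> dict:
    """Post-ingestion enrichment: wrap the raw content with content metadata and tags."""
    return {
        "content": payload,
        "metadata": {
            "source": source,                                    # where the data was ingested from
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "taxonomy": classify_content(payload),               # content-based classification
            "tags": ["post-ingestion", "etl-enriched"],
        },
    }

# Example: enrich a record arriving from a hypothetical landing zone
record = enrich_record("Invoice #1024 for Q2 services", source="s3://landing-zone/batch-07")
print(record["metadata"]["taxonomy"])   # finance/billing
```

In a real pipeline this classification would be far richer and governed, which is exactly why the preparation phase costs so much in resources, time and currency.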
Elements of a modern event-driven architecture including data ingestion (Credit: Qlik)
Even in the burgeoning times of open table formats (Apache Iceberg, Hudi, Delta Lake, et al.), open big data file formats (Avro, Parquet) and open data formats (CSV, XML, JSON, et al.), the format specifications with added context and meaning are only augmented post-ingestion.
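As one illustration of how that context gets attached after the fact, here is a short sketch using pyarrow to embed custom key/value metadata into a Parquet file’s schema. The tag keys and values are assumptions made for the example; real pipelines would use governed vocabularies.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# A tiny ingested dataset, already landed in the repository
table = pa.table({
    "customer_id": [101, 102, 103],
    "amount": [250.0, 99.5, 1200.0],
})

# Post-ingestion: augment the schema with content metadata (keys and values are illustrative)
enriched = table.replace_schema_metadata({
    b"taxonomy": b"finance/transactions",
    b"sensitivity": b"contains-customer-identifiers",
    b"ingest_source": b"pos-system-export",
})

pq.write_table(enriched, "transactions.parquet")

# Downstream consumers can read the context back out of the file itself
meta = pq.read_table("transactions.parquet").schema.metadata
print(meta[b"taxonomy"])   # b'finance/transactions'
```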
Last week, there was a press release by Qlik™ announcing a study it sponsored with TechTarget®‘s Enterprise Strategy Group (ESG) on the state of responsible AI practices across industries. The study highlighted critical gaps in the approach to responsible AI, ethical AI practices and AI regulatory compliance. From the study, Qlik™ emphasizes having a solid data foundation. To get to that bedrock foundation, we must trust the data and we must be responsible for the kinds of data that build that foundation. Hence, Data Trust and Data Responsibility.
There is an AI boom right now. Last year alone, the AI machine and its hype added USD$2.4 trillion in market cap to US tech companies. 5 months into 2024, AI is still supernova hot. And many are very much fixated on the infallible fables and tales of AI’s pompous splendour. It is this blind faith that has many users and vendors alike sidestepping the realities of AI in its present state.
AI is not always responsible. That begs the question: “Are we really working with a responsible set of AI applications and ecosystems“?
I am on a learning streak again. The most prominent technology that keeps landing on my tray at present is, of course, Artificial Intelligence (AI).
AI is hot. Very hot. And overhyped. Everyone is an expert nowadays. Yeah, right. Not me.
Underneath that glossy veneer of the AI hype, there is much going on behind the scenes to make AI great. The 2 areas I have been involved in and practised for a long time are data infrastructure, a.k.a. storage, and data management. Both are playing prominent parts in the advancement of the AI ecosystem. This makes me very excited.
I am no expert, but learning from various sources is already telling me that AI is pushing both storage and data management harder than ever before, much harder than traditional enterprise on-premises use cases and even the cloud computing applications. I ask myself, “where do I start my learning again?” as I journal my process.
Storage performance in a Data Pipeline
The speed at which AI responds builds Trust. The faster it gets to accurate and relevant responses, the more trust is built in AI. Getting to the speed we want is not easy, and storage, a.k.a. data infrastructure, is doing its part. I picked up my learning from understanding the AI data pipeline. One early help came from my friend, Gina Rosenthal, who attended Solidigm‘s presentation at AI Field Day in February 2024. Her article, titled “Why storage matters for AI – Solidigm“, kickstarted my learning juices again.
I was particularly captivated by this slide in Gina’s article. It defines the laborious path data takes to become useful for AI applications.
Storage and the 5 stages of AI Work (reference: Solidigm presentation on AI Field Day)
In the past weekend, I watched a CNA Insider video delving into Data Theft in Malaysia. It is titled “Data Theft in Malaysia: How your personal information may be exploited | Cyber Scammed”.
You can watch the 45-minute video below.
Such dire news is nothing new. We Malaysians are numbed to those telemarketers calling and messaging to offer their credit card services, loans, health spa services. You name it; there is something to sell. Of course, these “services” are mostly innocuous, but in recent years, the scams have risen several notches in severity. The levels of sophistication, the impact, and the damage (counting financial and human casualties) have rocketed. Along with the news, mainstream and otherwise, awareness of and interest in data, especially PII (personally identifiable information), among Malaysians are at their highest yet.
Somewhere, there is a misconception that data processing is cheap. That stems from the well-known pricing of public cloud storage capacity at fractions of a cent per month. But data in storage has to be worked on, built up and protected to increase its value. Data has to be processed, moved, shared, and used by applications. Data induces workloads. Nobody keeps data stored forever, never to be used again. Nobody buys storage for capacity alone.
We have a great saying in the industry: no matter where data moves, it will land in storage. So it is clear that data does not exist in the ether. And yet, I often see how little attention, prudence and care go into data infrastructure and data management technologies, the very components that are foundational to great data.
Great data management for Great AI
AI is driving up costs in data processing
A few recent articles drew my focus to the cost of data processing.
I was listening to several storage luminaries in GestaltIT’s podcast “No one understands Storage anymore” a few weeks ago. Around the 11:09 mark of the podcast, Dr. J. Metz, SNIA® Chair, brought up this powerful quote: “Storage does not mean Capacity“. It struck me, and not in a funny way. It is what it is, and it is something I have wanted to say to the many who do not understand the storage solutions they are purchasing. It exemplifies what is wrong in many organizations today in their understanding of investing in a storage infrastructure project.
This is my pet peeve. The first words uttered in most, if not all, storage requirements in my line of work are, “I want this many Terabytes of storage“. There are no other details or context about the other requirement factors, such as availability, performance, future growth, and so on, or even the goals to achieve when purchasing and operating a storage system. What is the improvement they are looking for? What are the problems to solve?
Where is the OKR?
It pains me to say this. Among the folks who have been in the IT industry for years, both end users and IT purveyors alike, most are absolutely clueless about OKRs (Objectives and Key Results) for their storage infrastructure projects. Many cannot frame the data challenges they are facing, and they have no idea where to go next. There is no alignment. There is no strategy. Even worse, there is no concept of how their storage infrastructure investments will improve their business and operations.
Just the other day, a company director from a renowned IT integrator here in Malaysia came calling. He has been in the IT industry since 1989 (I checked his LinkedIn profile), asking for a 100TB storage quote. I asked a few questions about availability, performance, scalability; the usual questions a regular IT guy would ask. He had no idea, and instead of telling me he didn’t know, he gave me a runaround of this and that. Plenty of yada, yada nonsense.
In the end, I told him to buy a consumer grade storage appliance from Taiwan. I will just let him make a fool of himself in front of his customer, since he didn’t want to take accountability for ensuring his customer gets a proper enterprise storage solution in good faith. His customer is probably in the same mould as well.
Defensive Strategies as Data Foundations
A strong storage infrastructure foundation is vital for good Data Credibility. If you do the right things for your data, there is Data Value, and it will serve your business well. Both Data Credibility and Data Value create confidence. And Confidence equates to Trust.
In order to create the defensive strategies, let’s look at storage Availability, Protection, Accessibility, Management, Security and Compliance. These are 6 of the 8 data points of the A.P.P.A.R.M.S.C. framework.
Offensive Strategies as Competitive Advantage
Once we have achieved stability of the storage infrastructure foundation, then we can turn over and drive towards storage Performance, Recovery, plus things like Scalability and Agility.
With a strong data infrastructure foundation, the organization can embark on the offensive and begin its business transformation journey, knowing that its data is well run, well protected, and performs.
Alignment with Data and Business Goals
Why do the defensive and offensive strategies require alignment to business goals?
The fact is simple. It is about improving the business and operations, and setting OKRs is key to measuring the ROI (return on investment) of getting the storage systems and solutions in place. It is about switching from a cost-fearing (negative) mindset to a profit-conviction (positive) mindset.
For example, maybe the availability of the data to the business is poor. Maybe there is a need to have access to the data 24×7, because the business is going online. The simple measurable fact is that we can move availability from 95% uptime to 99.99% uptime with an HA storage system.
Perhaps there are concerns about recoverability in the deluge of ransomware threats. Setting new RPO goals from 24 hours to 4 hours is a measurable objective to enhance data resiliency.
Or getting the storage systems to deliver higher performance from 350 IOPS to 5000 IOPS for the database.
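As a rough sketch of how measurable these markers are, the arithmetic below converts uptime percentages into allowed downtime per year. The figures mirror the 95% to 99.99% availability example above; the helper function is purely illustrative.

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours_per_year(uptime_pct: float) -> float:
    """Convert an uptime percentage into the downtime hours it permits per year."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

before = downtime_hours_per_year(95.0)    # ~438 hours, roughly 18 days, of downtime a year
after = downtime_hours_per_year(99.99)    # ~0.88 hours, roughly 53 minutes, of downtime a year

print(f"95% uptime allows ~{before:.0f} hours of downtime a year")
print(f"99.99% uptime allows ~{after * 60:.0f} minutes of downtime a year")
```

The same before-and-after framing applies to the RPO and IOPS examples: each one is a number you can put on a dashboard and track.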
What I am saying here is that these data points are measurable, and they can serve as checkpoints for business and operational improvements. From a management perspective, they can be used as KPIs (key performance indicators) to define continuous improvement of Data Confidence.
Furthermore, when an OKR dashboard is used, it is easy to map the improvement markers as organizations use storage to move from point A to point B, where B equates to a new success milestone. The alignment sets the path to the business targets.
Storage does not mean only Capacity
The sad part is that the OKRs and the measured goal alignments are glaringly missing from the minds of many organizations purchasing a storage infrastructure and data management solution. The people tasked with sourcing a storage technology solution are not setting goals and objectives. Capacity appears to be the only thing on their minds.
I am about to meet a procurement officer of a customer soon. She asked me this question “Why is your storage so expensive?” over email. I want to change her mindset, just like the many officers and C-levels who hold the purse strings.
Let’s frame the use of storage infrastructure in the real world. Nobody buys a storage system just to keep data in it, much like a puddle holding stagnant water. Sooner or later the value of the data in the storage evaporates, or becomes dull, if the data is not used well in any way, shape or form.
Storage systems and the interconnected pathways from on premises, to the next premises, to the edge and to the clouds serve the greater good for Data. Data is used, shared, shaped, improved, enhanced, protected, moved, and more to deliver Value to the Business.
Storage capacity is just one of the factors to consider when investing in a storage infrastructure solution. In fact, capacity is probably the least important piece when considering a storage solution to achieve the company’s OKRs. If we think about it more deeply, setting the foundation for Data in the defensive manner helps elevate the value of the data, which can then be promoted through the offensive strategies to gain competitive advantage.
Storage infrastructure and storage solutions, along with data management platforms, may appear to be a cost to the annual budgets. If you set the OKRs, define how to get from A to B, and align the goals, storage infrastructure and data management platforms and practices are investments worth their weight in gold. That is my guarantee.
On the flip side, ignoring and avoiding OKRs, and setting strategies without prudence, will yield their comeuppance. Technical debt will prevail.
There was a Super Blue Moon a few days ago. It was a rare sky show. Friends of mine who are photo and moon gazing enthusiasts were showing off their digital captures online. One ignorant friend, probably a bit envious of the attention the others were getting, quipped that his Oppo Reno 10 Pro Plus could take better pictures. The Oppo Reno 10 Pro Plus claims 3x optical zoom and 120x digital zoom. Yes, 120 times!
Yesterday, a WIRED article came out titled “How Much Detail of the Moon Can Your Smartphone Really Capture?” It was a very technical article. I thought the author did an excellent job explaining the physics behind his notes. But I also found the article funny, flippant even, when I juxtaposed this WIRED article to what my envious friend was saying the other day about his phone’s camera.
Super Blue Moon 2023
Open Source storage expectations and outcomes
I work for iXsystems™. Open Source has been in its DNA for over 30 years. Similarly, I have worked with Open Source (before it was even called open source) in my home labs ever since I entered the industry. I had the SoftLanding Linux System on 3.5″ diskettes (Linux kernel 0.99), and I bought a boxed set of the FreeBSD OS from Walnut Creek (photo below). My motivation was to learn as much as possible about the information technology world because I was taking my first steps into building my career (and quietly trying to prove my father wrong) in the IT industry.
FreeBSD Boxed Set (circa 1993)
Open source has democratized technology. It has placed the power of very innovative technology into the hands of the common people. With Open Source, I have seen the IT landscape change as well, especially for home labbers like myself in the early years. Social media platforms, FAANG (Facebook, Apple, Amazon, Netflix, Google) and the like have amplified that power (to the people). But with that great power comes great responsibility. And some users with little technology background start to have hallucinated expectations and outcomes. Just like my friend with the “powerful” Oppo phone.
Likewise, in my world, I have plenty of anecdotes of these types of open source storage users having wild expectations, but little skill to turn them into reality.