Brainy Commvault

[Disclosure: I was invited by Commvault as a Media person and Social Ambassador to their Commvault GO 2019 Conference, and also as a Tech Field Day eXtra delegate, from Oct 13-17, 2019 in Denver, CO, USA. My expenses, travel, accommodation and conference fees were covered by Commvault, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

The waltz across the Commvault-Hedvig minefield will not be easy. Commvault will have to hold a lot of open discussions about their acquisition of Hedvig, and about how the Hedvig “primary storage platform” will fit into the “secondary storage framework” of Commvault. The outcome of this union has yet to take a structured form. The storyline will eventually take shape as Commvault diligently defines its strategy moving forward.

Day 1

Day 1 was my opening day at Commvault GO. I was absorbing my first impressions of Commvault again, even though this was my third Commvault GO, after Washington DC and Nashville in 2017 and 2018 respectively. There has certainly been a “startup” feeling in Commvault again since the appointment of Sanjay Mirchandani as CEO 9 months ago.

A lot of excitement and buzz were generated around Metallic, the Commvault venture into Software-as-a-Service (SaaS). The SaaS solution is targeted at the mid-market, for organizations with a 500-2500 staff count. Its simplicity and pricing were the 2 things which gave me a good feeling all over. There is even a 45-day trial for Metallic.

Getting Brainy

My Day 2 itinerary was more specific, because my agenda for this trip was to seek answers on how the Commvault-Hedvig combination would be realized.

Commvault took the distinctive approach of using the vision of a DataBrain (#databrain) to define their strategy. In the picture below, the left hemisphere of the DataBrain forms the Storage Management piece, and the right hemisphere the Data Management piece.

Continue reading

Commvault coming all together

[Disclosure: I was invited by Commvault as a Media person and Social Ambassador to their Commvault GO 2019 Conference, and also as a Tech Field Day eXtra delegate, from Oct 13-17, 2019 in Denver, CO, USA. My expenses, travel, accommodation and conference fees were covered by Commvault, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

This trip to the Commvault GO conference was pretty much a mission to find answers about their Hedvig acquisition just a month ago. It was an unprecedented move for Commvault, and I, as an industry observer and pundit, took the news positively. I wrote in my blog about Commvault’s big bet, and I liked the boldness of their approach.

But the news did not bode well back here in Malaysia. The local technology news portal, Data Storage Asean, picked up the news in a rather unconvinced way. 2 long-time Commvault partners I spoke to were obviously unhappy, because the acquisition made little sense to them on the back of the closure of the Commvault Malaysia office just weeks before, along with more unsettling rumours about the Commvault team in Asia Pacific. The broken trust, and the fear of what the future held for the Commvault customers in Malaysia and in the region, were riding along with me on this trip.

But I have seen the beginnings of the Commvault transformation at the Commvault GO conferences I have attended since 2017. This was my 3rd Commvault GO, and I ended Day 1 with good vibes.

Here are some of my highlights from the first day. Continue reading

Commvault big bet

I woke up at 2.59am on the morning of Sept 5th, a bit discombobulated, and quickly jumped onto the Commvault call. The damn alarm rang and I slept through it, but I got up just in time for the 3am call.

As I was going through the motions of getting onto UberConference, organized by GestaltIT, I was already sensing something big. On the call, the news that Commvault was acquiring Hedvig hit me. My drowsy self snapped awake to the big news. And I saw a few guys from Veritas and Cohesity in my social media group making gestures about the acquisition.

I spent the rest of the week thinking about the acquisition. What is good? What is bad? How is Commvault going to move forward? This was pressing against the stark background of the rumour mill here in Southeast Asia, where just a week before this acquisition news, I heard that the entire Commvault teams in Malaysia and Asia Pacific were released. I couldn’t confirm the news for Asia Pacific, but the source of the news from Malaysia was a strong and reliable one.

What is good?

It is a big win for Hedvig. Nestled among several scale-out primary storage vendors with little competitive differentiation, Hedvig has found its payday in this Commvault acquisition.

Continue reading

Hybrid is the new Black

It is hard for enterprises to let IT go, isn’t it?

For years, we have seen the cloud computing juggernaut unrelenting in getting enterprises to put their IT into public clouds. Some of the biggest banks have put their faith in public cloud service providers. Close to home, Singapore’s United Overseas Bank (UOB) is one that has jumped on the bandwagon, signing up for VMware Cloud on AWS. But none comes bigger than the US government’s Joint Enterprise Defense Infrastructure (JEDI) project, where AWS and Azure are the last 2 bidders for the USD10 billion contract.

Confidence or lack of it

Those 2 cited examples should be big enough to usher enterprises to confidently embrace public cloud services, but many enterprises have been holding back. What gives?

In the past, it was a matter of confidence and FUD (fear, uncertainty, doubt). News about security breaches and massive blackouts has been widely spread and amplified to sensationalize the effects and consequences of cloud services. But then again, we get the same thing in poorly managed data centers in enterprises and government agencies, often with much less fanfare. We shrug our shoulders and say “Oh well!“.

The lack-of-confidence factor, I think, has been overthrown. The “Cloud First” strategy in enterprises in recent years speaks volumes of the growing and maturing confidence in cloud services. The poor-performance and high-latency objections, which were once an Achilles heel of cloud services, are diminishing. HPC-as-a-Service is becoming real.

The confidence in cloud services is strong. Then why is on-premises IT suddenly a cool thing again? Why is hybrid cloud getting all the attention now?

Hybrid is coming back

Even AWS wants on-premises IT. Its Outposts offering outlines that ambition. A couple of years earlier, Azure Stack had already made a beachhead on-premises through its partnerships with many server vendors. VMware is in both on-premises and the public clouds, with strong business and technology integration with AWS and Azure. With IBM Cloud, Big Blue is thinking hybrid as well. 2 months ago, Dell jumped in too, announcing Dell Technologies Cloud with plenty of razzmatazz, making all the right moves with its strong on-premises infrastructure portfolio and the crown jewel of the federation, VMware. Continue reading

Storage Performance Considerations for AI Data Paths

The hype around Deep Learning (DL), Machine Learning (ML) and Artificial Intelligence (AI) has reached an unprecedented frenzy. Every infrastructure vendor, from servers, to networking, to storage, has something to say or a part to play in DL/ML/AI. This prompted me to explore this hyped ecosystem from a storage perspective, notably from a storage performance requirement point-of-view.

One question on my mind

There are plenty of questions on my mind. One stood out, and it is related to storage performance requirements.

Reading and learning from one storage technology vendor after another, the context of everyone’s play against their competitors seems to be “They are archaic, they are legacy. Our architecture is built from the ground up, modern, NVMe-enabled“. There is more such juxtaposing, but you get the picture – “We are better, no doubt“.

Are the data patterns and behaviours of AI different? How do they affect the storage design as the data moves through the workflow, the data paths and the lifecycle of the AI ecosystem?
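To make the question concrete, consider the read pattern that deep learning training imposes on storage: shuffled, repeated, small reads across an entire dataset, epoch after epoch. Below is a minimal sketch of that access pattern using PyTorch; the dataset is synthetic, purely to illustrate the randomized read behaviour, not any vendor’s workload.

```python
# A minimal sketch of the random-read pattern DL training imposes on
# storage: shuffled, repeated, small reads across the whole dataset.
# The dataset is synthetic; a real one would read files or objects.
import torch
from torch.utils.data import Dataset, DataLoader

class SyntheticImages(Dataset):
    def __len__(self):
        return 10_000                      # 10k "files" sitting on storage

    def __getitem__(self, idx):
        # In real training this would be a small read from disk or
        # object storage, arriving in a storage-unfriendly random order.
        return torch.randn(3, 224, 224), idx % 10

loader = DataLoader(SyntheticImages(), batch_size=64,
                    shuffle=True, num_workers=4)

for epoch in range(3):                     # every epoch re-reads everything
    for images, labels in loader:
        pass                               # training step would go here
```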

Continue reading

Scaling new HPC with Composable Architecture

[Disclosure: I was invited by Dell Technologies as a delegate to their Dell Technologies World 2019 Conference from Apr 29-May 1, 2019 in Las Vegas, USA. Tech Field Day Extra was an included activity as part of Dell Technologies World. My expenses, travel, accommodation and conference fees were covered by Dell Technologies, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

Deep Learning, Neural Networks, Machine Learning and, subsequently, Artificial Intelligence (AI) are the new generation of applications and workloads for commercial HPC systems. They are different from the traditional, more scientific and engineering HPC workloads. I have written about the new dawn of supercomputing and the attractive posture of commercial HPC.

Don’t be idle

From the business perspective, the investment in HPC systems is high most of the time, and justifying it to the executives and the investors is not easy. Therefore, it is critical to keep feeding the HPC systems and significantly minimize the idle times for compute, GPUs, network and storage.

However, almost all HPC systems today are inflexible. Once assigned to a project, the resources pretty much stay with the project, even when the workload processing of the project is idle and waiting. Of course, we have to bear in mind that not all resources are fully abstracted, virtualized and software-defined, whereby you can carve out pieces of the hardware and deliver a percentage of that resource. A case in point is the CPU, where you cannot assign certain clock cycles to one project and the rest to another; the technology isn’t there yet. Certain resources like GPUs are going down the path of the virtual GPU, and into the realm of resource disaggregation. Eventually, all resources of the HPC systems – CPU, memory, FPGA, GPU, PCIe channels, NVMe paths, IOPS, bandwidth, burst buffers etc – should be disaggregated and pooled for disparate applications and workloads based on demands of usage, time and performance.

Hence we are beginning to see disaggregated HPC system resources composed and built up to meet the diverse mix and needs of HPC applications and workloads. This is even more acute when an AI project might grow cold, but the training of AI/ML/DL workloads continues to stay hot.
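To illustrate the idea, here is a hypothetical sketch of what a composable resource API could look like. None of these names come from Liqid or any real product; `Fabric`, `compose` and `decompose` are made up solely to show devices being leased from a disaggregated pool and returned when a workload goes cold.

```python
# A hypothetical sketch of composable infrastructure - not a real API.
# Devices are leased from a disaggregated pool and returned when the
# workload goes cold, instead of sitting idle with one project.
from dataclasses import dataclass
from typing import List

@dataclass
class ResourcePool:
    gpus: int = 8
    nvme_drives: int = 16

@dataclass
class ComposedNode:
    name: str
    gpus: int
    nvme_drives: int

class Fabric:
    """Tracks free devices on a PCIe/NVMe-oF fabric and leases them out."""

    def __init__(self, pool: ResourcePool):
        self.pool = pool
        self.nodes: List[ComposedNode] = []

    def compose(self, name: str, gpus: int, nvme_drives: int) -> ComposedNode:
        # Carve a bare-metal node out of the free pool, if enough is left.
        if gpus > self.pool.gpus or nvme_drives > self.pool.nvme_drives:
            raise RuntimeError("insufficient free devices in the pool")
        self.pool.gpus -= gpus
        self.pool.nvme_drives -= nvme_drives
        node = ComposedNode(name, gpus, nvme_drives)
        self.nodes.append(node)
        return node

    def decompose(self, node: ComposedNode) -> None:
        # Return the devices to the pool for other workloads to claim.
        self.pool.gpus += node.gpus
        self.pool.nvme_drives += node.nvme_drives
        self.nodes.remove(node)

fabric = Fabric(ResourcePool())
training = fabric.compose("dl-training", gpus=6, nvme_drives=8)
# ... run the hot training job, then release the devices:
fabric.decompose(training)
```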

Liqid, the early leader in Composable Architecture

Continue reading

Lift and Shift Begone!

I am excited. New technologies are bringing data (and storage) closer to processing and compute than ever before. I believe the “Lift and Shift” way will be a thing of the past … soon.

Data is heavy

Moving data across the network is painful. Moving data across distributed networks is even more painful. To compile the recent first image of a black hole, 5PB or more of data had to be shipped for central processing. If this had been moved over a 10 Gigabit network, it would have taken weeks.
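A quick back-of-the-envelope check bears the “weeks” claim out, assuming an ideal, uncontended 10 Gbps link with no protocol overhead:

```python
# Back-of-the-envelope: 5 PB over a 10 Gbps link at perfect line rate.
data_bits = 5e15 * 8            # 5 PB expressed in bits
link_bps = 10e9                 # 10 Gigabit per second
seconds = data_bits / link_bps  # = 4,000,000 seconds
print(f"{seconds / 86400:.0f} days")  # ~46 days, i.e. 6-7 weeks
```

Real networks, with protocol overhead and contention, would fare even worse, which is why the data was physically shipped instead.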

Furthermore, data has dependencies. Snapshots, clones, and other data relationships with applications and processes render data inert, weighing it down like a ship’s anchor.

When I first started in the industry more than 25 years ago, Direct Attached Storage (DAS) was the dominant storage platform. I had a bulky Sun MultiDisk Pack connected via Fast SCSI to my SPARCstation 2 (diagram below):

Then I was assigned as the implementation engineer for the Hock Hua Bank (now defunct) retail banking project at their Sibu HQ in East Malaysia. It was the first Sun SPARCstorage 1000 (photo below), running direct-attached 0.25 Gbps FCAL (Fibre Channel Arbitrated Loop). It was the cusp of the birth of the SAN (Storage Area Network).

Photo from https://www.cca.org/dave/tech/sys5/

The proliferation of SAN over the next 2 decades pushed DAS into obscurity, until SAS (Serial Attached SCSI) came about. Added to the mix was the prominence of Cloud Storage. But on-premises storage and Cloud Storage didn’t always come together. There was always a valley between the 2, until the public clouds gained a stronger foothold in the minds of IT and businesses. Today, both on-premises storage and cloud storage are slowly cosying up as one Data Singularity, thanks to the vision and conceptualization of data fabrics. NetApp was an early proponent of the Data Fabric concept 4 years ago. Continue reading

Bridges to the clouds and more – NetApp NDAS

[Preamble: I was invited by GestaltIT as a delegate to their Tech Field Day for Storage Field Day 18 from Feb 27-Mar 1, 2019 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

The NetApp Data Fabric Vision

The NetApp Data Fabric vision has always been clear to me. Maybe it was because of my 2 stints with them, during which I got well soaked in their culture. 3 simple points define the vision.

  • The Data Fabric is THE data singularity. Data can be anywhere – on-premises, the clouds, and more.
  • Have bridges, paths and workflow management for the Data, to move the data to wherever the data may be.
  • Work with technology partners to build tools and data systems to elevate the value of the data.

That is how I see it. I wrote about the Transcendence of the Data Fabric vision 3+ years ago, and I emphasized the importance of the Data Pipeline in another NetApp blog almost a year ago. The introduction of NetApp Data Availability Services (NDAS) in the recently concluded Storage Field Day 18 was no different as NetApp constructs data bridges and paths to the AWS Cloud.

NetApp Data Availability Services

The NDAS feature is only available with ONTAP 9.5. In fewer than 5 clicks, data from ONTAP primary systems can be backed up to a secondary ONTAP target (running the NDAS proxy and the Copy to Cloud API), and then to AWS S3 buckets in the cloud.
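The Copy to Cloud API itself is NetApp’s own, but the final destination is plain S3. Purely as an illustration of that last hop, here is a minimal sketch of an object landing in an S3 bucket with boto3; the file, bucket and key names are assumptions made up for the example, not NDAS internals.

```python
# Illustrative only: the Copy to Cloud API is NetApp's own. This shows
# just the last hop of the workflow - an object landing in an S3 bucket.
# The file, bucket and key names are made up for the example.
import boto3

s3 = boto3.client("s3")  # credentials come from the usual AWS config chain
s3.upload_file(
    Filename="/backups/vol1_snapshot.img",    # hypothetical exported copy
    Bucket="ndas-backup-target",              # hypothetical bucket
    Key="ontap/vol1/2019-03-01/snapshot.img",
)
```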

Continue reading

VAST Data must be something special

[Preamble: I was invited by GestaltIT as a delegate to their Tech Field Day for Storage Field Day 18 from Feb 27-Mar 1, 2019 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog about or promote the technologies presented at this event. The content of this blog is of my own opinions and views]

Vast Data’s coming-out bash!

The delegates of Storage Field Day have always been a lucky bunch. We have witnessed several storage technology companies coming out of stealth at these Tech Field Days. The recent ones in memory for me were Excelero and Hammerspace. But to have the venerable storage doyen, Mr. Howard Marks, Vast Data’s new tech evangelist, introduce the deep dive of the Vast Data technology was something special.

For those who know Howard, he is fiercely independent, very smart about storage technology, opinionated and not easily impressed. As a storage technology connoisseur myself, I believe Howard must have seen something special in Vast Data. They must be doing something extremely unique and impressive that someone like Howard could not resist, something that made him jump to the vendor side. This sets the tone of my blog.

Continue reading

Minio – the minimalist object storage technology

The Marie Kondo KonMari fever is sweeping the world. Her methods of decluttering and organizing the home are leading to a new way of life – Minimalism.

Complicated Storage Experience

Storage technology and its architecture are complex. We pile layer upon layer of abstraction and virtualization into storage design until, at some stage, choke points lead to performance degradation and management becomes difficult.

I recall a particular training course I attended back in 2006. I had just joined Hitachi Data Systems for the Shell GUSto project, and I was in Baltimore for the Hitachi NAS course. This was not their HNAS (from their BlueArc acquisition) but their home-grown NAS based on Linux. In the training, we were setting up the NFS service. There were 36 steps required to set up and provision NFS, and if there was a misstep, you started from the first command again. Coming from NetApp at the time, I found it horrendous. NetApp ONTAP NFS setup and provisioning probably took 3 commands, while this Hitachi NAS setup and configuration was so much more complex. In the end, the experience was just otherworldly to me.
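That 36-step ordeal is exactly the kind of complexity Minio discards. As a minimal sketch, provisioning and using object storage with Minio’s Python SDK takes just a handful of calls; the endpoint, credentials, bucket and object names below are placeholders, not a real deployment.

```python
# A minimal sketch with the Minio Python SDK (pip install minio).
# The endpoint, credentials, bucket and object names are placeholders.
from minio import Minio

client = Minio(
    "minio.example.com:9000",       # placeholder endpoint
    access_key="YOUR-ACCESS-KEY",
    secret_key="YOUR-SECRET-KEY",
    secure=False,
)

# Three calls to a working object store: connect, bucket, object.
if not client.bucket_exists("archive"):
    client.make_bucket("archive")

client.fput_object("archive", "reports/q1.pdf", "/tmp/q1.pdf")
```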

Introducing Minio to my world, to Malaysia

Continue reading