Open Source and Open Standards open the Future

[Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, USA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer, and I was not obligated to blog or promote the vendors’ technologies presented at this event. The content of this blog is of my own opinions and views]

Western Digital dived into Storage Field Day 19 in full force, as they did in Storage Field Day 18, with a series of high impact presentations, each curated for the diverse requirements of the audience. Several open source initiatives were shared, all built on open standards to address present inefficiencies and designed and developed for a greater future.

Zoned Storage

One of the initiatives aims to increase efficiency around SMR and SSD zoning capabilities, removing the complexities and overlaps of both media. This is the Zoned Storage initiative, a technical working proposal to the existing NVMe standards. The resulting outcome will give applications in user space more control over the placement of data blocks on zone-aware devices (such as SMR drives) and zoned SSDs, known collectively as Zoned Block Devices (ZBD). The implementation in the Linux user and kernel space is shown below:
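As a rough illustration of the behaviour those interfaces expose to applications, here is a minimal Python sketch of the sequential-write rule enforced at a zone’s write pointer. The class, zone size and block counts are hypothetical and for illustration only; they are not part of any NVMe or vendor API.

```python
class Zone:
    """Toy model of one zone on a Zoned Block Device (ZBD)."""
    def __init__(self, zone_size_blocks=65536):
        self.zone_size = zone_size_blocks   # zone capacity in logical blocks (assumed)
        self.write_pointer = 0              # next block that may be written

    def write(self, start_block, num_blocks):
        # Zoned devices only accept writes at the current write pointer, which is
        # why the host (file system or application) must place data sequentially
        # within a zone.
        if start_block != self.write_pointer:
            raise IOError("unaligned write: zone expects block %d" % self.write_pointer)
        if self.write_pointer + num_blocks > self.zone_size:
            raise IOError("write exceeds zone capacity")
        self.write_pointer += num_blocks

    def reset(self):
        # A zone reset rewinds the write pointer, invalidating the zone's data.
        self.write_pointer = 0

zone = Zone()
zone.write(0, 128)      # OK: starts at the write pointer
zone.write(128, 256)    # OK: strictly sequential
# zone.write(1024, 8)   # would fail: not at the write pointer
```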

Continue reading

Green Storage? Meh!

Something triggered my thoughts a few days ago. A few of us got together talking about climate change, and a friend asked how green the datacenter in IT really was. With cloud computing booming, I would say that green computing isn’t really the hottest thing at present. That, in turn, leads us to one of the most voracious energy beasts in the datacenter: storage. Where is green storage in the equation?

What is green?

Over the past decade, several storage-related technologies were touted as more energy efficient. These include:

  • Tape – when tapes are offline, they do not consume power and do not require cooling
  • Virtualization – virtualization reduces the number of servers and desktops, and of course storage too
  • MAID (Massive Array of Idle Disks) – the arrays spin down the HDDs if they are idle for a period of time
  • SSD (Solid State Drives) – compared to HDDs, SSDs consume much less power and reduce the overall cooling needs (see the rough power comparison after this list)
  • Data Footprint Reduction – deduplication, compression and other technologies that reduce copies of data
  • SMR (Shingled Magnetic Recording) Drives – higher areal density means fewer drives, but the gains are limited by physics
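As a back-of-envelope illustration of the SSD point above, here is a small Python sketch comparing the annual energy of a shelf of HDDs against a shelf of SSDs. The wattages, shelf size and hours are my own rough assumptions for illustration, not measurements of any particular drive or array.

```python
# Rough, assumed figures for illustration only (actual drives vary widely).
HDD_ACTIVE_WATTS = 8.0    # assumption: typical 3.5" nearline HDD under load
SSD_ACTIVE_WATTS = 3.0    # assumption: typical enterprise SSD under load
DRIVES_PER_SHELF = 24     # assumption: one 24-drive shelf
HOURS_PER_YEAR = 24 * 365

def shelf_kwh_per_year(watts_per_drive, drives=DRIVES_PER_SHELF):
    """Annual energy for one shelf of drives, before cooling overhead."""
    return watts_per_drive * drives * HOURS_PER_YEAR / 1000.0

hdd_kwh = shelf_kwh_per_year(HDD_ACTIVE_WATTS)
ssd_kwh = shelf_kwh_per_year(SSD_ACTIVE_WATTS)
print(f"HDD shelf: {hdd_kwh:,.0f} kWh/year, SSD shelf: {ssd_kwh:,.0f} kWh/year")
print(f"Saving: {hdd_kwh - ssd_kwh:,.0f} kWh/year per shelf, before cooling")
```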

The largest gorilla in storage technology

HDDs still dominate the market, and they are the biggest producers of heat and vibration in a storage array, along with the redundant power supplies and fans. Until and unless SSDs dominate, we have to live with the fact that storage disk drives are not green. The statistics from Statista below forecast that in 2021, the shipment of SSDs will surpass that of HDDs.

Today, the areal density of HDDs has increased. With SMR (shingled magnetic recording), the areal density jumped about 25% beyond the 1 Tb/in² (terabit per square inch) of CMR (conventional magnetic recording) drives. The largest SMR drive in the market today is 16TB from Seagate, with an 18TB SMR drive on the horizon. That capacity is going to grow significantly when EAMR (energy assisted magnetic recording) drives – which include heat-assisted and microwave-assisted recording – enter the market next year. The areal density will grow to 1.6 Tb/in², with a roadmap to 4.0 Tb/in². Continue reading

Brainy Commvault

[Disclosure: I was invited by Commvault as a Media person and Social Ambassador to their Commvault GO 2019 Conference and also as a Tech Field Day eXtra delegate from Oct 13-17, 2019 in Denver, CO, USA. My expenses, travel, accommodation and conference fees were covered by Commvault, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

The waltz across the Commvault-Hedvig minefield will not be easy. Commvault will have a lot of open discussions about their acquisition of Hedvig and how Hedvig’s “primary storage platform” will fit into the “secondary storage framework” of Commvault. The outcome of this consummation has yet to take a structured form. The storyline will eventually emerge as Commvault diligently defines its strategy moving forward.

Day 1

Day 1 was my opening day at Commvault GO. I was absorbing first impressions of Commvault again, even though this was my third Commvault GO, after Washington DC and Nashville in 2017 and 2018 respectively. There has certainly been a “startup” feeling in Commvault again since the appointment of Sanjay Mirchandani as CEO 9 months ago.

A lot of excitement and buzz was generated around Metallic, the Commvault venture into Software-as-a-Service (SaaS). The SaaS solution is targeted at the mid-market, at organizations with a headcount of 500-2,500. Its simplicity and pricing were the 2 things which gave me a good feeling all over. There is even a 45-day trial for Metallic.

Getting Brainy

My Day 2 itinerary was more specific, because my agenda for this trip was to seek answers on how the Commvault-Hedvig combination would be realized.

Commvault took the distinctive step of using the vision of a DataBrain (#databrain) to define their strategy. In the picture below, the left and right hemispheres of the DataBrain form the Storage Management piece on the left and the Data Management piece on the right.

Continue reading

Commvault coming all together

[Disclosure: I was invited by Commvault as a Media person and Social Ambassador to their Commvault GO 2019 Conference and also as a Tech Field Day eXtra delegate from Oct 13-17, 2019 in Denver, CO, USA. My expenses, travel, accommodation and conference fees were covered by Commvault, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

This trip to the Commvault GO conference was pretty much a mission to find answers about their Hedvig acquisition just a month earlier. It was an unprecedented move for Commvault, and I, as an industry observer and pundit, took the news positively. I wrote about Commvault’s big bet in my blog, and I liked the boldness of their approach.

But the news did not bode well back here in Malaysia. The local technology news portal, Data Storage Asean, picked up the news in a rather unconvinced way. 2 long-time Commvault partners I spoke to were obviously unhappy, because the acquisition made little sense to them on the back of the closure of the Commvault Malaysia office just weeks before, along with more unsettling rumours about the Commvault team in Asia Pacific. The broken trust, and the fear of what the future held for Commvault customers in Malaysia and the region, rode along with me on this trip.

But I have seen the beginnings of the Commvault transformation at the Commvault GO conferences I have attended since 2017. This was my 3rd Commvault GO, and I ended Day 1 with good vibes.

Here were some of my highlights from the first day. Continue reading

Commvault big bet

I woke up at 2.59am on the morning of Sept 5th, a bit discombobulated, and quickly jumped onto the Commvault call. The damn alarm had rung and I slept through it, but I got up just in time for the 3am call.

As I was going through the motions of getting onto the UberConference call organized by GestaltIT, I was already sensing something big. On the call, it hit me: Commvault was acquiring Hedvig. My drowsy self snapped to the big news. And I saw a few guys from Veritas and Cohesity in my social media groups making gestures about the acquisition.

I spent the rest of the week thinking about the acquisition. What is good? What is bad? How is Commvault going to move forward? This was pressing against the stark background of the rumour mill here in South Asia: just a week before the acquisition news, I had heard that the entire Commvault teams in Malaysia and Asia Pacific were released. I couldn’t confirm the Asia Pacific news, but the source of the news coming from Malaysia was a strong and reliable one.

What is good?

It is a big win for Hedvig. Nestled among several scale-out primary storage vendors with little competitive differentiation, this Commvault acquisition is Hedvig’s payday.

Continue reading

Hybrid is the new Black

It is hard for enterprises to let IT go, isn’t it?

For years, we have seen the cloud computing juggernaut unrelenting in getting enterprises to put their IT into public clouds. Some of the biggest banks have put their faith in public cloud service providers. Close to home, Singapore’s United Overseas Bank (UOB) is one that has jumped on the bandwagon, signing up for VMware Cloud on AWS. But none will come bigger than the US government’s Joint Enterprise Defense Infrastructure (JEDI) project, where AWS and Azure are the last 2 bidders for the USD10 billion contract.

Confidence or lack of it

Those 2 cited examples should be big enough to usher enterprises into confidently embracing public cloud services, yet many enterprises have been holding back. What gives?

In the past, it was a matter of confidence and FUD (fear, uncertainty, doubt). News about security breaches and massive outages was widely spread and amplified to sensationalize the effects and consequences of cloud services. But then again, we get the same thing in poorly managed data centers in enterprises and government agencies, often with much less fanfare. We shrug our shoulders and say “Oh well!“.

The lack-of-confidence factor, I think, has been overthrown. The “Cloud First” strategy in enterprises in recent years speaks volumes of the growing and maturing confidence in cloud services. The poor performance and high latency arguments, once an Achilles’ heel of cloud services, are diminishing. HPC-as-a-Service is becoming real.

The confidence in cloud services is strong. Then why is on-premises IT suddenly a cool thing again? Why is hybrid cloud getting all the attention now?

Hybrid is coming back

Even AWS wants on-premises IT. Its Outposts offering outlines its ambition. A couple of years earlier, Azure Stack had already made a beachhead on-premises through partnerships with many server vendors. VMware is in both on-premises and the public clouds, with strong business and technology integration with AWS and Azure. With IBM Cloud, Big Blue is thinking hybrid as well. 2 months ago, Dell jumped in too, announcing Dell Technologies Cloud with plenty of razzmatazz, making all the right moves with its strong on-premises infrastructure portfolio and the crown jewel of its federation, VMware. Continue reading

Storage Performance Considerations for AI Data Paths

The hype of Deep Learning (DL), Machine Learning (ML) and Artificial Intelligence (AI) has reached an unprecedented frenzy. Every infrastructure vendor, from servers to networking to storage, has something to say or a part to play in DL/ML/AI. This prompted me to explore this hyped ecosystem from a storage perspective, notably from a storage performance requirement point of view.

One question on my mind

There are plenty of questions on my mind. One stood out, and it is related to storage performance requirements.

Reading and learning from one storage technology vendor after another, the context of everyone’s play against their competitors seems to be “They are archaic, they are legacy. Our architecture is built from the ground up, modern, NVMe-enabled“. There is plenty more of such juxtaposition, but you get the picture – “We are better, no doubt“.

Are the data patterns and behaviours of AI different? How do they affect the storage design as the data moves through the workflow, the data paths and the lifecycle of the AI ecosystem?
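One simple way to start answering that question is to measure how fast the storage can actually feed the training pipeline. The Python sketch below times sequential reads of a few sample files; the file paths are hypothetical and the block size is an arbitrary assumption.

```python
import os
import time

def read_throughput(path, block_size=1 << 20):
    """Measure sequential read throughput of one file, in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):   # stream the file in 1 MiB chunks
            pass
    elapsed = time.perf_counter() - start
    return (size / (1 << 20)) / elapsed

# Hypothetical sample files from a training dataset.
for sample in ["dataset/shard-0001.tfrecord", "dataset/shard-0002.tfrecord"]:
    if os.path.exists(sample):
        print(f"{sample}: {read_throughput(sample):.1f} MB/s sequential read")
```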

Continue reading

Scaling new HPC with Composable Architecture

[Disclosure: I was invited by Dell Technologies as a delegate to their Dell Technologies World 2019 Conference from Apr 29-May 1, 2019 in Las Vegas, USA. Tech Field Day Extra was an included activity as part of Dell Technologies World. My expenses, travel, accommodation and conference fees were covered by Dell Technologies, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

Deep Learning, Neural Networks, Machine Learning and, subsequently, Artificial Intelligence (AI) are the new generation of applications and workloads for commercial HPC systems. They are different from the traditional, more scientific and engineering-oriented HPC workloads; I have written about the new dawn of supercomputing and the attractive posture of commercial HPC.

Don’t be idle

From the business perspective, the investment in HPC systems is high most of the time, and justifying it to the executives and the investors is not easy. Therefore, it is critical to keep feeding the HPC systems and to significantly minimize the idle time of compute, GPUs, network and storage.

However, almost all HPC systems today are inflexible. Once assigned to a project, the resources pretty much stay with the project, even when the project’s workload processing is idle and waiting. Of course, we have to bear in mind that not all resources are fully abstracted, virtualized and software-defined, whereby you can carve out pieces of the hardware and deliver a percentage of that resource. A case in point is the CPU, where you cannot assign a certain share of clock cycles to one project and the rest to another. The technology isn’t there yet. Certain resources, like the GPU, are going down the path of the virtual GPU and into the realm of resource disaggregation. Eventually, all resources of the HPC systems – CPU, memory, FPGA, GPU, PCIe channels, NVMe paths, IOPS, bandwidth, burst buffers etc – should be disaggregated and pooled for disparate applications and workloads based on demands of usage, time and performance.

Hence we are beginning to see disaggregated HPC system resources composed and built up to meet the diverse mix and needs of HPC applications and workloads. This is even more acute when an AI project might grow cold, but the training of AI/ML/DL workloads continues to stay hot.
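As a purely conceptual sketch of that composability idea (not any vendor’s API), the Python below models resource pools from which a logical node claims resources while a workload is hot and releases them back when the project goes cold. The pool names, sizes and node name are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """A disaggregated pool of one resource type (GPUs, NVMe paths, FPGA, ...)."""
    name: str
    total: int
    allocated: int = 0

    def claim(self, amount):
        if self.allocated + amount > self.total:
            raise RuntimeError(f"pool '{self.name}' exhausted")
        self.allocated += amount

    def release(self, amount):
        self.allocated = max(0, self.allocated - amount)

@dataclass
class ComposedNode:
    """A logical HPC node assembled on demand, then dissolved back into the pools."""
    name: str
    claims: dict = field(default_factory=dict)

    def compose(self, pools, request):
        for resource, amount in request.items():
            pools[resource].claim(amount)
            self.claims[resource] = amount

    def decompose(self, pools):
        for resource, amount in self.claims.items():
            pools[resource].release(amount)
        self.claims.clear()

# Hypothetical pools shared by many workloads.
pools = {"gpu": ResourcePool("gpu", 16), "nvme_paths": ResourcePool("nvme_paths", 64)}
training_node = ComposedNode("ml-training-01")
training_node.compose(pools, {"gpu": 8, "nvme_paths": 16})   # hot training job
training_node.decompose(pools)                               # project goes cold, resources return
```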

Liqid the early leader in Composable Architecture

Continue reading

Lift and Shift Begone!

I am excited. New technologies are bringing the data (and storage) closer to processing and compute than ever before. I believe the “Lift and Shift” way will be a thing of the past … soon.

Data is heavy

Moving data across the network is painful. Moving data across distributed networks is even more painful. To compile the recent first image of a black hole, an amount of 5PB or more had to be shipped for central processing. If this had been moved over a 10 Gigabit per second network, it would have taken weeks.
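Working the arithmetic out (and assuming, generously, a fully and continuously utilised link with zero protocol overhead, which never happens in practice), a quick Python calculation shows why shipping the data made more sense than transferring it:

```python
data_bytes = 5 * 10**15          # 5 PB of black hole imaging data
link_bps   = 10 * 10**9          # 10 Gigabit per second network link

seconds = (data_bytes * 8) / link_bps
print(f"{seconds / 86400:.0f} days, or about {seconds / (86400 * 7):.1f} weeks")
# ~46 days, roughly 6.6 weeks, at 100% sustained utilisation
```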

Furthermore, data has dependencies. Snapshots, clones, and other data relationships with applications and processes render data inert, weighing it down like an anchor of a ship.

When I first started in the industry more than 25 years ago, Direct Attached Storage (DAS) was the dominating storage platform. I had a bulky Sun MultiDisk Pack connected via Fast SCSI to my SPARCstation 2 (diagram below):

Then I was assigned as the implementation engineer for the Hock Hua Bank (now defunct) retail banking project at their Sibu HQ in East Malaysia. It involved the first Sun SPARCstorage 1000 (photo below) I worked with, running direct-attached 0.25 Gbps FCAL (Fibre Channel Arbitrated Loop). It was the cusp of the birth of the SAN (Storage Area Network).

Photo from https://www.cca.org/dave/tech/sys5/

The proliferation of SAN over the next 2 decades pushed DAS into obscurity, until SAS (Serial Attached SCSI) came about. Added to the mix was the prominence of Cloud Storage. But on-premises storage and Cloud Storage didn’t always come together; there was always a valley between the 2, until the public clouds gained a stronger foothold in the minds of IT and businesses. Today, both on-premises storage and cloud storage are slowly cosying up as one Data Singularity, thanks to the vision and conceptualization of data fabrics. NetApp was an early proponent of the Data Fabric concept 4 years ago. Continue reading

Bridges to the clouds and more – NetApp NDAS

[Preamble: I have been invited by GestaltIT as a delegate to their Tech Field Day for Storage Field Day 18 from Feb 27-Mar 1, 2019 in Silicon Valley, USA. My expenses, travel and accommodation were covered by GestaltIT, the organizer, and I was not obligated to blog or promote their technologies presented at this event. The content of this blog is of my own opinions and views]

The NetApp Data Fabric Vision

The NetApp Data Fabric vision has always been clear to me. Maybe it is because of my 2 stints with them, during which I got well soaked in their culture. 3 simple points define the vision:

  • The Data Fabric is THE data singularity. Data can be anywhere – on-premises, the clouds, and more.
  • Have bridges, paths and workflow management for the data, to move it to wherever it needs to be.
  • Work with technology partners to build tools and data systems that elevate the value of the data.

That is how I see it. I wrote about the Transcendence of the Data Fabric vision 3+ years ago, and I emphasized the importance of the Data Pipeline in another NetApp blog almost a year ago. The introduction of NetApp Data Availability Services (NDAS) at the recently concluded Storage Field Day 18 was no different, as NetApp constructs data bridges and paths to the AWS cloud.

NetApp Data Availability Services

The NDAS feature is only available with ONTAP 9.5. With fewer than 5 clicks, data from ONTAP primary systems can be backed up to the secondary ONTAP target (running the NDAS proxy and the Copy to Cloud API), and then to AWS S3 buckets in the cloud.
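As a small illustrative check from the AWS side (this is not part of NDAS itself), one could list what has landed in the target bucket with boto3; the bucket name below is hypothetical:

```python
import boto3

# Hypothetical bucket name; NDAS copies the data into an S3 bucket in your own AWS account.
s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket="my-ndas-target-bucket", MaxKeys=10)

for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])
```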

Continue reading