Lift and Shift Begone!

I am excited. New technologies are bringing data (and storage) closer to processing and compute than ever before. I believe the "Lift and Shift" approach will become a thing of the past … soon.

Data is heavy

Moving data across the network is painful. Moving data across distributed networks is even more painful. To compile the recent first image of a black hole, more than 5PB of data had to be physically shipped for central processing. If it had been moved over a 10 Gigabit network instead, it would have taken weeks.
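A quick back-of-the-envelope calculation bears this out. The sketch below assumes decimal petabytes and, optimistically, full line-rate utilization with no protocol overhead; real-world transfers would take longer.

```python
# Time to move 5 PB over a 10 Gbps link, assuming perfect utilization.
DATA_BYTES = 5 * 10**15   # 5 PB (decimal)
LINK_BPS = 10 * 10**9     # 10 Gbps

seconds = DATA_BYTES * 8 / LINK_BPS
days = seconds / 86400
print(f"{days:.0f} days (~{days / 7:.1f} weeks)")
# → 46 days (~6.6 weeks)
```

Even under these ideal assumptions, the transfer takes over six weeks, which is why the Event Horizon Telescope teams shipped physical disks instead.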

Furthermore, data has dependencies. Snapshots, clones, and other relationships with applications and processes render data inert, weighing it down like a ship's anchor.

When I first started in the industry more than 25 years ago, Direct Attached Storage (DAS) was the dominant storage platform. I had a bulky Sun MultiDisk Pack connected via Fast SCSI to my SPARCstation 2 (diagram below):

Then I was assigned as the implementation engineer for the Hock Hua Bank (now defunct) retail banking project at their Sibu HQ in East Malaysia. It was the first Sun SPARCstorage 1000 (photo below), connected via direct-attached 0.25Gbps FC-AL (Fibre Channel Arbitrated Loop). It was the cusp of the birth of the SAN (Storage Area Network).

Photo from https://www.cca.org/dave/tech/sys5/

The proliferation of SAN over the next two decades pushed DAS into obscurity, until SAS (Serial Attached SCSI) came about. Added to the mix was the rise of Cloud Storage. But on-premises storage and Cloud Storage didn't always come together. There was always a valley between the two, until the public clouds gained a stronger foothold in the minds of IT and businesses. Today, on-premises storage and cloud storage are slowly cosying up into one Data Singularity, thanks to the vision and conceptualization of data fabrics. NetApp was an early proponent of the Data Fabric concept 4 years ago.

It is still Lift and Shift (For Now)

Today, many storage technology vendors are still espousing "Lift and Shift". What is "Lift and Shift"? According to a definition by TechTarget,

"Lift and shift is a strategy for moving an application or operation from one environment to another – without redesigning the app. In the lift-and-shift approach, certain workloads and tasks can be moved from on-premises storage to the cloud, or data operations might be transferred from one data center to another."

So, to take advantage of the elasticity and pay-per-use advantages of the cloud, on-premises applications have to be moved to the cloud. Vice versa, applications in the cloud that have to meet performance and latency commitments would have to be moved to on-premises networks to deliver high speed to applications in corporate data centers and server rooms. Bridging both sides of the divide, cloud and on-premises, is complicated, not to mention the hassle of it all.

A NIC is now a storage device

Did I mention I was excited? Certain advancements in networking technology and NVMe (NVMe-over-Fabrics included) are blurring the demarcation between network and storage, making a cloud storage volume behave and perform like local storage. How cool is that!
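To make this concrete, here is a rough sketch of how an NVMe-over-Fabrics (TCP transport) volume shows up as a local NVMe device on a Linux host using the standard nvme-cli tool. The target address and subsystem NQN below are hypothetical placeholders, not from any real deployment.

```shell
# Load the NVMe/TCP initiator module (assumes a recent Linux kernel).
modprobe nvme-tcp

# Discover subsystems exported by a remote target (address is illustrative).
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to a discovered subsystem; the NQN here is a made-up example.
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
    -n nqn.2019-04.example.com:remote-volume-01

# The remote volume now appears as an ordinary local device, e.g. /dev/nvme1n1.
nvme list
```

From this point on, the OS treats the remote volume like any directly attached NVMe drive, which is exactly the demarcation-blurring effect described above.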

My first tip of the hat goes to Amazon Web Services and the technology they acquired from Annapurna Labs in 2015. The offspring of this acquisition is AWS Nitro, and from AWS Nitro, the C5 instances, bare-metal capability, and Outposts were born. AWS is awesome that way!

I am not deep diving into AWS Nitro because I might do the technology injustice (plus I am still learning). The best way is to check out the first iteration of Anthony Liguori's Nitro deep dive at AWS re:Invent 2017 in the video below.

Bringing a similar concept to the masses is Mellanox SNAP (Software-Defined Network Accelerated Processing). The Mellanox technology is really, really cool, and someone in the storage industry called it Sexy NVMe Accelerated Processing. I think that moniker is more fun.

At its core, SNAP has similar functionalities to AWS Nitro, but adds composable infrastructure features as well. Again, I am not in a position to do a deep dive into SNAP (yet), but the diagram below shows how the Mellanox SmartNIC presents itself as an NVMe device to the OS or hypervisor on its northbound side. More details can be found here.

About a month ago, I theorized that Microsoft wanted to acquire Mellanox Technologies because of their SNAP technology. Microsoft wanted to have an AWS C5 killer.

Changing the storage landscape (again!)

I wrote about NVMe bringing equilibrium to the storage landscape more than 2 years ago. It would end the wars of DAS vs networked storage. It would harmonize the warring factions. With technologies like AWS Nitro and Mellanox SNAP, NVMe has become the great healer, closing the divide between on-premises storage and cloud storage as well. It is simply brilliant!

This means that data can flow freely between premises and the clouds, removing the need to Lift and Shift. Obstructions to data pipelines, and the need to copy data from location to location, premises to premises, are reduced significantly.

I believe this data singularity vision will become mainstream soon. There will not be multiple clouds. There will be just one piece of Data.

 

 


About cfheoh

I am a technology blogger with 25+ years of IT experience. I write heavily on technologies related to storage networking and data management because that is my area of interest and expertise. I introduce technologies with the objectives to get readers to *know the facts*, and use that knowledge to cut through the marketing hypes, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then, there will be progress. I am involved in SNIA (Storage Networking Industry Association) and as of October 2013, I have been appointed as SNIA South Asia & SNIA Malaysia non-voting representation to SNIA Technical Council. I currently run a small system integration and consulting company focusing on storage and cloud solutions, with occasional consulting work on high performance computing (HPC).
