The Return of SAN and NAS with AWS?

AWS what?

Amazon Web Services announced Outposts at re:Invent last week. It was not much of a surprise for me, because when AWS struck their partnership with VMware in 2016, the undercurrents were already there to bring AWS services right to the doorstep of any datacenter. In my mind, AWS has built so far out into the cloud that eventually, the only way to grow is to come back to the core of IT services – The Enterprise.

Their intentions were indeed stealthy, but I have long been a believer in the IT pendulum. What has swung out to the left or the right will eventually come back to the centre again. History has proven that, time and time again.

SAN and NAS coming back?

A friend of mine casually spoke about the AWS Outposts announcement. Does that mean SAN and NAS are coming back? I couldn’t hide my excitement at the prospect of the return but … be still, my beating heart!

I am a storage dinosaur now. My era started in the early 90s. SAN and NAS were a big part of my career, but cloud computing has changed and shaped the landscape of on-premises shared storage. SAN and NAS are probably closeted by the younger generation of storage engineers and storage architects, who are more adept at S3 APIs and Infrastructure-as-Code. The nuts and bolts of Fibre Channel, SMB (or CIFS, if one still prefers that name) and NFS are of lesser prominence, and concepts such as FLOGI, PLOGI, SMB mandatory locking, NFS advisory locking and even iSCSI IQNs are probably alien to many of them.
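
For the uninitiated, the contrast in mindset is stark. Here is a minimal sketch (the file path, bucket and key names are purely hypothetical) that places an NFS advisory lock, taken with POSIX locking calls, next to the equivalent "new world" operation, a simple S3 object PUT through the boto3 SDK:

```python
import fcntl   # POSIX locking; on an NFS mount these locks are advisory
import boto3   # AWS SDK for Python

# Old world: NFS advisory locking. Other clients are blocked only if
# they also ask for the lock -- nothing stops a writer that never asks.
with open("/mnt/nfs_share/report.dat", "r+b") as f:
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)   # exclusive advisory lock
    f.write(b"updated payload")
    fcntl.flock(f.fileno(), fcntl.LOCK_UN)   # release the lock

# New world: no mounts, no locks -- just an object PUT over HTTPS.
s3 = boto3.client("s3")
s3.put_object(Bucket="my-hypothetical-bucket",
              Key="report.dat",
              Body=b"updated payload")
```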

What is AWS Outposts?

In a nutshell, AWS will be selling servers and infrastructure gear. The AWS-branded hardware, ranging from a single server to large racks, will be shipped to a customer’s datacenter or any hosting location, packaged with AWS’ popular compute and storage services, and optionally with VMware technology for virtualized computing resources.

[Image taken from https://aws.amazon.com/outposts/]

In a move à la Azure Stack, Outposts completes the round trip of the IT pendulum. It has swung to the left; it has swung to the right; it is now back at the centre. AWS is no longer a public cloud computing company. They have just become a hybrid cloud computing company.

It is the Data

The data singularity is important in hybrid cloud computing. Many have called it the Data Fabric, and NetApp was one of the early proponents of the data fabric story. Data Gravity is both a boon and a bane in the data fabric. Data has a way of attracting other data services to it, connecting applications and workloads to achieve business objectives. At the same time, the attachment of data services such as data snapshots, data availability and data performance requirements adds mass to the data, accumulating weight as the data ages and making it difficult to move the data closer to the location of data processing.

Some of the data gravity points (A.P.P.A.R.M.S.C.) I have used regularly in my consulting work are listed below, with a simple scoring sketch after the list:

  • Availability
  • Performance
  • Protection
  • Accessibility
  • Recovery
  • Management
  • Security
  • Compliance
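
To make the idea of accumulating mass concrete, here is a purely illustrative sketch – the weights and the dataset are hypothetical, not a formal methodology – that tallies those eight points into a rough gravity score; the heavier the score, the harder the data resists being moved:

```python
# Hypothetical data-gravity tally across the A.P.P.A.R.M.S.C. points.
# Each point is weighted 1 (light) to 5 (heavy); the sum is a rough
# proxy for how strongly the dataset will resist being moved.
APPARMSC = ["Availability", "Performance", "Protection", "Accessibility",
            "Recovery", "Management", "Security", "Compliance"]

def gravity_score(weights: dict) -> int:
    """Sum the weights across all eight data-gravity points."""
    return sum(weights[point] for point in APPARMSC)

seismic_dataset = {
    "Availability": 4, "Performance": 5, "Protection": 4, "Accessibility": 3,
    "Recovery": 4, "Management": 3, "Security": 5, "Compliance": 5,
}
print(gravity_score(seismic_dataset))   # 33 of a possible 40 -- heavy data
```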

If the data is on-premises but the cheaper computing resources are in the cloud, moving the data to the cloud is difficult. That is why we see many companies leveraging their technologies – lift-and-shift, data replication and synchronization – to bridge the gap and craft the designs of hybrid cloud and multi-cloud services.
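
As a trivial illustration of that bridging pattern, the sketch below (the bucket name and directory are hypothetical, and single-part ETags are assumed) pushes new or changed on-premises files up to S3 – a poor man’s one-way synchronization of the kind those replication tools industrialize:

```python
import hashlib
import pathlib
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-hypothetical-bucket"   # assumption: the bucket already exists

def sync_up(local_dir: str) -> None:
    """One-way sync: upload files whose content differs from the S3 copy."""
    for path in pathlib.Path(local_dir).rglob("*"):
        if not path.is_file():
            continue
        key = path.relative_to(local_dir).as_posix()
        local_md5 = hashlib.md5(path.read_bytes()).hexdigest()
        try:
            remote = s3.head_object(Bucket=BUCKET, Key=key)
            if remote["ETag"].strip('"') == local_md5:
                continue            # unchanged since the last sync -- skip
        except ClientError:
            pass                    # object not in the bucket yet
        s3.upload_file(str(path), BUCKET, key)

sync_up("/mnt/onprem_data")
```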

Furthermore, we also have to take into consideration the applications’ and workloads’ workflows, or Data Pipelines. A consistent and unencumbered data workflow, especially in applications with large datasets such as Post-Production or Seismic Interpretation, is critical to organizations.

At the same time, we are seeing the brewing of specialized applications and workloads. Commercial HPC applications, driven heartily by machine learning, deep learning and Artificial Intelligence (AI) requirements, are sprouting within on-premises datacenters, where organizations are using their mountains of historical and present data to gain data insights and predict new business advantages. Such data is usually locked within the organization for data privacy and sovereignty reasons, making it unsuitable for public clouds like AWS. Having spent 18 of my 26 years periodically engaging Oil & Gas upstream customers on their applications and data, I know that countries like Malaysia, Indonesia and Brunei are extremely stringent about these data sovereignty requirements, and I am sure every oil-producing country does the same to secure and protect its data.

Are they coming back or not?

Amidst the mini euphoria of AWS Outposts and the prospect of SAN and NAS coming back, I see a different form of SAN and NAS emerging. We are no longer solution- or infrastructure-focused; we are application- and workload-focused. I see application- and workload-specific types of SAN and NAS: application appliances, specialized to deliver a single workload’s data characteristics along with its entire data pipeline.

For example, for AI-specific workloads, I spoke about Pure Storage AIRI (AI-Ready Infrastructure), ONTAP AI and DDN A3I in my previous HPC blog. At the 8th OpenStack Malaysia summit, I also spoke about the use case of Excelero and Mellanox delivering high SAN performance in OpenStack for an Oracle 12c RAC cluster at teuto.net.

These are not general applications and workloads. These are specialized applications and workloads that require on-premises, high-throughput, low-latency networking. No matter how much bandwidth is available between the on-premises data and the cloud’s processing resources, SAN and NAS will always outclass the cloud on storage performance when they sit on the local network.
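
A back-of-envelope calculation shows why. Using illustrative round numbers (assumptions, not benchmarks) of about 0.5 ms per I/O on a local SAN versus a 20 ms round trip to a distant cloud region, a latency-bound, serial I/O stream degrades by about 40x:

```python
# Illustrative round numbers -- assumptions, not benchmarks.
LOCAL_SAN_LATENCY = 0.0005    # ~0.5 ms per I/O on a local FC/NVMe fabric
WAN_RTT_LATENCY   = 0.020     # ~20 ms round trip to a distant cloud region
IO_SIZE_BYTES     = 8 * 1024  # 8 KiB, a typical database page

for name, latency in [("local SAN/NAS", LOCAL_SAN_LATENCY),
                      ("over the WAN", WAN_RTT_LATENCY)]:
    iops = 1.0 / latency                        # serial, latency-bound stream
    mib_s = iops * IO_SIZE_BYTES / (1024 ** 2)
    print(f"{name:>13}: {iops:6.0f} IOPS, {mib_s:5.1f} MiB/s")

# local SAN/NAS:   2000 IOPS,  15.6 MiB/s
#  over the WAN:     50 IOPS,   0.4 MiB/s  -- a 40x penalty
```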

It is still early days for AWS Outposts. Its server and infrastructure offerings will unlikely trigger a return of SAN and NAS for general compute workloads and storage. But as on-premises data that demands high speeds and low latency grows, it is something AWS Outposts may want to consider.

Specialized, application- and workload-specific SAN and NAS appliances. That is what I see emerging in the next 2-3 years, as the pendulum swings to the centre again.

What about DAS?

Direct Attached Storage (DAS) has evolved too. DAS has already become a specialized appliance of its own, although more towards infrastructure appliances than application and workload appliances. Scale-out VSAN and hyperconverged infrastructure platforms redefined DAS a few years ago, and will continue to do so for another 1-2 years. NVMe and NVMeoF (NVMe over Fabrics) technologies are breaking the hold of DAS in hyperconverged platforms, and as NVMeoF storage matures in the next 1-2 years, shared storage such as SAN will come back again. DAS and shared storage will harmonize and evolve to meet the new types of applications and workloads.
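
To put the NVMeoF point in numbers, here is a rough comparison – the microsecond figures are ballpark assumptions, not measurements – of per-I/O read latency for local NVMe DAS, NVMe over Fabrics, and a traditional iSCSI SAN:

```python
# Ballpark per-I/O read latencies in microseconds -- assumptions, not measurements.
MEDIA_US = 90   # raw NVMe flash read

paths = {
    "NVMe DAS (local PCIe)": MEDIA_US + 10,   # near-zero transport overhead
    "NVMe over Fabrics":     MEDIA_US + 20,   # ~10-20 us added by the fabric
    "iSCSI SAN (TCP/IP)":    MEDIA_US + 400,  # full SCSI + TCP/IP stack
}
for path, total_us in paths.items():
    overhead = 100 * (total_us - MEDIA_US) / MEDIA_US
    print(f"{path:<24} {total_us:4d} us  (+{overhead:.0f}% over raw media)")
```

The takeaway: once the fabric overhead shrinks to a rounding error against the media itself, the locality advantage of DAS largely disappears, which is why shared NVMeoF storage can plausibly pull workloads back to SAN-style architectures.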

In the end …

In the end, there are no absolutes in IT. Nothing is constant and technology is ever changing. Your comments are welcome, as always 😉