The Return of SAN and NAS with AWS?

AWS what?

Amazon Web Services announced Outposts at re:Invent last week. It was not much of a surprise to me, because when AWS formed its partnership with VMware in 2016, the undercurrents were already there for AWS services to arrive right at the doorstep of any datacenter. In my mind, AWS has built so far out into the cloud that eventually, the only way to grow is to come back to the core of IT services – the Enterprise.

Their intentions were indeed stealthy, but I have long been a believer in the IT pendulum. What has swung out to the left or to the right eventually comes back to the centre again. History has proven that, time and time again.

SAN and NAS coming back?

A friend of mine casually spoke about the AWS Outposts announcement. Does that mean SAN and NAS are coming back? I couldn't hide my excitement hearing of the return, but … be still, my beating heart!

I am a storage dinosaur now. My era started in the early 90s. SAN and NAS were a big part of my career, but cloud computing has changed and shaped the landscape of on-premises shared storage. SAN and NAS are probably closeted by the younger generation of storage engineers and storage architects, who are more adept with S3 APIs and Infrastructure-as-Code. The nuts and bolts of Fibre Channel, SMB (or CIFS, if one still prefers that name) and NFS are of lesser prominence, and concepts such as FLOGI, PLOGI, SMB mandatory locking, NFS advisory locking and even the iSCSI IQN are probably alien to many of them.

What is AWS Outposts?

In a nutshell, AWS will be selling servers and infrastructure gear. The AWS-branded hardware, ranging from a single server to large racks, will be shipped to a customer's datacenter or any hosting location, packaged with popular AWS compute and storage services, and optionally, with VMware technology for virtualized computing resources.

In a move à la Azure Stack, Outposts completes the round trip of the IT pendulum. It has swung to the left; it has swung to the right; it is now back at the centre. AWS is no longer just a public cloud computing company. They have just become a hybrid cloud computing company. Continue reading

Sexy HPC storage is all the rage

HPC is sexy

There is no denying it. HPC is sexy. HPC Storage is just as sexy.

Looking at the latest buzz from the Supercomputing Conference 2018 (SC18), which happened in Dallas 2 weeks ago, the number of storage-related vendors participating was staggering. Panasas, Excelero and BeeGFS are the ones I know of because I had friends posting their highlights. Then there are the perennial vendors like IBM, Dell, HPE, NetApp, Huawei, Supermicro, and so many more. A quick check on the SC18 website showed that there were 391 exhibitors on the floor.

And this is driven by the unrelenting demand for higher and higher computing performance, and along with it, the demand for faster and faster storage performance. The commercialization of Artificial Intelligence (AI), Deep Learning (DL) and newer applications and workloads, together with the traditional HPC workloads, is driving these ever increasing requirements. However, most enterprise storage platforms were not designed to meet the demands of this new generation of applications and workloads, contrary to what many have been led to believe. Why so?

I had a couple of conversations with a few well-known vendors around the topic of HPC storage. Several of the responses thrown back were to throw Flash and NVMe at the high performance demands of HPC storage. In my mind, these responses were too trivial, too irresponsible. So I wanted to write this blog to share my views on HPC storage, and not just about its performance.

The HPC lines are blurring

I picked up this video (below) a few days ago. It was insideHPC's Rich Brueckner interviewing Dr. Goh Eng Lim, HPE CTO and renowned HPC expert, about the convergence of traditional and commercial HPC applications and workloads.

I liked the conversation in the video because it addressed the two different approaches. And I welcomed Dr. Goh's invitation to the Commercial HPC community to work with the Traditional HPC vendors to help push the envelope towards Exascale Supercomputing.

Continue reading

The Malaysian Openstack storage conundrum

The Openstack blips on my radar have ratcheted up this year. I have been asked to put together IaaS designs several times, with either the RedHat or the Ubuntu flavour, and it's a good thing to see the Openstack interest level going up in the Malaysian IT scene. Coming into its 8th year, Openstack has become a mature platform, but when it comes to the storage projects of Openstack, my observations tell me that these storage-related projects are still not well known.

I was one of the speakers at the Openstack Malaysia 8th Summit over a month ago. I started my talk with a question – “Can anyone name the 4 Openstack storage projects?”. The response from the floor was “Swift, Cinder, Ceph and … (nobody knew the 4th one)”. It took me by surprise when the floor almost unanimously agreed that Ceph is one of the Openstack projects, but we know that Ceph isn't one. Ceph? An Openstack storage project?

Besides Swift and Cinder, there is Glance (depending on how you look at it) and the least known of them all … Manila.

I have also been following many Openstack Malaysia discussions and discussion groups for a while. That Ceph response showed the lack of awareness and knowledge of the Openstack storage projects among the Malaysian IT crowd, and it is a difficult issue to tackle. The storage conundrum continues to perplex me because many of those I have spoken to seem to avoid talking about storage, viewing it as a dark art or some voodoo thingy.

I view storage as the cornerstone of the 3 infrastructure pillars – compute, network and storage – of Openstack, or any software-defined infrastructure stack for that matter. So it is important to get an understanding of the Openstack storage projects, especially Cinder.

Cinder is the abstraction layer that gives management and control to the block storage beneath it. In a nutshell, it allows Openstack VMs and applications to consume block storage in a consistent and secure way, regardless of the storage infrastructure or technology beneath it. This is achieved through the cinder-volume service, which most storage vendors integrate with via their own drivers (as shown in the diagram below).

Diagram in slides is from Mirantis
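
For those who want to see what that abstraction means in practice, here is a minimal sketch of how an application or a tenant would consume Cinder block storage through the Openstack SDK. The cloud name, volume name and size are placeholders of my own, and the sketch assumes an openstacksdk installation with a working clouds.yaml entry.

```python
import openstack

# Connect using credentials from a clouds.yaml entry.
# The cloud name "mycloud" is a placeholder.
conn = openstack.connect(cloud="mycloud")

# Ask Cinder for a 10 GB volume. Which backend driver actually carves out
# the block device (Ceph RBD, a Fibre Channel array, an iSCSI target, etc.)
# is decided by cinder-scheduler and the configured volume types,
# not by this call.
volume = conn.block_storage.create_volume(name="demo-vol", size=10)
print(volume.id, volume.status)

# The same API lists volumes consistently, regardless of the storage
# infrastructure or technology beneath it.
for v in conn.block_storage.volumes():
    print(v.name, v.size, v.status)
```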

Cinder-volume, together with cinder-api and cinder-scheduler, forms the Block Storage service for Openstack. There is another service, cinder-backup, which integrates with Openstack Swift, but at my last check, this service is not as popular as cinder-volume, which is widely supported by many storage vendors with both Fibre Channel and iSCSI implementations, and in a few vendors, with NFS and SMB as well. Continue reading

Of Object Storage, Filesystems and Multi-Cloud

Data storage silos everywhere. The early clarion call was to eliminate IT data storage silos by moving to the cloud. Fast forward to the present. Data storage silos are still everywhere, but this time, they are in the clouds. I blogged about this.

Object Storage was all the rage when it first started. AWS, with its S3 (Simple Storage Service) offering, started the cloud storage frenzy. Highly available, globally distributed, simple to access, and it fitted superbly into the entire AWS ecosystem. Quickly, a smorgasbord of S3-compatible, S3-like object-based storage emerged. OpenStack Swift, HDS HCP, EMC Atmos, Cleversafe (which became IBM Cloud Object Storage), Inktank Ceph (which became RedHat Ceph), Bycast (acquired by NetApp to become StorageGRID), Quantum Lattus, Amplidata, and many more. For a few years there, it looked to me as if the popularity of object storage with an S3-compatible front end had overtaken distributed file systems.

What's not to like? Object stores are distributed, they are metadata-rich (at a certain structural level), they are immutable (hence secure, from a certain point of view), and some even claim to be self-healing (depending on data protection policies). But one thing object storage rarely touted was dominance in high performance I/O. There were some cases, but they were either fronted by a file system (e.g. NFSv4.1 with pNFS extensions), or used some host-based, SAN-client agent (e.g. StorNext or Intel Lustre). Object-based storage, in its native form, has not been positioned as high performance I/O storage.
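
To illustrate the "simple to access" and "metadata-rich" points, here is a small sketch using boto3 against an S3-compatible endpoint. The endpoint, credentials, bucket and object names are placeholders I made up; the same calls work against AWS S3 and most S3-compatible object stores.

```python
import boto3

# Endpoint, credentials, bucket and key below are placeholders, not a real
# deployment. Any S3-compatible object store is addressed the same way.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Each object carries user-defined metadata alongside its payload.
s3.put_object(
    Bucket="demo-bucket",
    Key="archive/survey-001.dat",
    Body=b"raw bytes of the file go here",
    Metadata={"project": "block-a", "ingested": "2018-12-01"},
)

# The metadata can be read back without fetching the payload itself.
head = s3.head_object(Bucket="demo-bucket", Key="archive/survey-001.dat")
print(head["Metadata"])
```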

A few weeks ago, I read an article by Dave Raffo on Storage Soup. When I read it, it felt oxymoronic. SwiftStack had just been named a Visionary in the Gartner Magic Quadrant for Distributed File Systems and Object Storage. But according to Dave's article, SwiftStack did not want to be “associated” with object storage that much, even though SwiftStack's technology underpinning was all object storage. Strange.

Continue reading

The rise of RDMA

I have known of RDMA (Remote Direct Memory Access) for quite some time, but never in depth. Since my contract work ended last week, and I have some time off for personal development, I decided to look deeper into RDMA. Why RDMA?

In the past year or so, RDMA has been appearing on my radar very frequently, and rightly so. The speedy development and adoption of NVMe (Non-Volatile Memory Express) have pushed All Flash Arrays to the next level. This pushes the I/O and throughput performance bottlenecks away from the NVMe storage medium and into the legacy world of SCSI.

Most storage interfaces and protocols in use today – SAS, SATA, iSCSI, Fibre Channel – still carry SCSI payloads and would have to translate between NVMe and SCSI. NVMe-to-SCSI bridges have to be present to facilitate the translation.

In the slide below, shared at the Flash Memory Summit, there were numerous red boxes which laid out the SCSI connections and interfaces where SCSI-to-NVMe translation (and vice versa) would be required.

Continue reading

The reverse wars – DAS vs NAS vs SAN

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane as the channel-like throughput of the Fibre Channel protocol, coupled with the million-device addressing of FC, obliterated parallel SCSI, which was only able to handle 16 devices and throughput of up to 80 (later 160 and 320) MB/sec.

NAS – defined by the CIFS/SMB and NFS protocols – was happily chugging along on the 100 Mbit/sec network, and occasionally getting sucked into arguments about why SAN was better than NAS. I was already heavily dipped into NFS, because I was pretty much a SunOS/Solaris bigot back then.

When I joined NetApp in Malaysia in 2000, the NAS-SAN wars were already going on, waiting for me. NetApp (or Network Appliance, as it was known then) was trying to grow beyond its dot-com roots into the enterprise space, and guys like EMC and HDS were frequently trying to put NetApp down.

“It's a toy” was the most common jibe I got in regular engagements, until EMC suddenly decided to attack Network Appliance directly with their EMC CLARiiON IP4700. EMC guys would fondly remember this as the “NetApp killer”. Continue reading

Why demote archived data access?

We are all familiar with the concept of data archiving. Passive data gets archived from production storage and is migrated to a slower and often cheaper storage medium such as tapes or SATA disks. Hence the terms nearline and offline data were created. With that, IT constantly reminds users that the archived data is infrequently accessed, and therefore, they have to accept the slower access to passive, archived data.

Business conditions have certainly changed, because the need for data to be 100% online is becoming more relevant. The new competitive nature of businesses dictates that data must be at their fingertips, because speed and agility are the new competitive advantage. Often the total amount of data, production and archived, runs into hundreds of TBs, even into petabytes!

The industries I am familiar with – Oil & Gas, and Media & Entertainment – are facing this situation. These industries have a deluge of files and unstructured data in their archives, much of it dormant, inactive and sitting on old tapes of a bygone era. Yet, these files and unstructured data have the most potential to be explored, mined and analyzed to realize their value to the organization. In short, the archived data and files must be democratized!

The flip side is, when the archived files and unstructured data are coupled with a slow access interface or an unreliable storage infrastructure, the value of the archived data is downgraded, because the friction of access aggravates the interaction between applications and business requirements. How would organizations value archived data more if the access path to it is so damn hard???!!!

An interesting solution fell into my lap some months ago, and putting A and B together (A + B), I believe the access path to archived data can be of unbelievably high performance, simple, transparent and, most importantly, can remove the BLOODY PAIN of FILE AND DATA MIGRATION! For storage administrators and engineers familiar with data migration, especially when the size of the migration runs into hundreds of TBs or even PBs, you know what I mean!

I have known of this solution for some time now, because I have been avidly following its development ever since its founders left NetApp, after their Spinnaker venture, to start Avere Systems.


Continue reading

SMB Witness Protection Program

No, no, the FBI is not in the storage business and there are no witnesses to protect.

However, SMB 3.0 has introduced an RPC-based mechanism to inform clients of any state change in the SMB servers. Microsoft calls it the Service Witness Protocol [SWP], and its objective is to provide a much faster notification service that allows SMB 3.0 clients to fail over. In the earlier SMB 1.0, and even in SMB 2.x, SMB clients relied on time-outs. The time-outs, whether SMB or TCP, could take as much as 30-45 seconds, and this creates a high latency that is disruptive to enterprise applications.
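
To get a feel for why a pushed notification beats waiting, here is a toy Python sketch of the two failover behaviours. It is purely a conceptual simulation, not the actual Witness RPC exchange, and the 30-second time-out and 2-second notification delay are invented numbers for illustration.

```python
import asyncio
import time

async def smb_client(witness_event: asyncio.Event, timeout: float = 30.0):
    """Fail over when the witness notifies us, or when the time-out
    expires, whichever comes first."""
    start = time.monotonic()
    try:
        await asyncio.wait_for(witness_event.wait(), timeout=timeout)
        reason = "witness notification"
    except asyncio.TimeoutError:
        reason = "SMB/TCP time-out"
    print(f"failing over after {time.monotonic() - start:.1f}s ({reason})")

async def main():
    witness_event = asyncio.Event()
    # Simulate the witness service spotting the failed node after ~2 seconds
    # and pushing a state-change notification to the client.
    asyncio.get_running_loop().call_later(2.0, witness_event.set)
    await smb_client(witness_event)

asyncio.run(main())
```

Comment out the call_later line and the client sits there for the full 30 seconds before it reacts, which is exactly the kind of latency SWP was designed to remove.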

SMB 3.0, as mentioned in my previous post, had a total revamp, and is now enterprise ready. In what Microsoft calls “Continuously Available” File Service, SMB 3.0 supports clustered or scale-out file servers. The SMB shares must be created as “Continuously Available” shares and mapped to SMB 3.0 clients, as shown in the diagram below (provided by SNIA's webinar).

SMB 3.0 CA Shares

Client A maps to the Server 1 share (\\srv1\CAshr). Client A has a share “handle” that establishes a connection with a corresponding session state. That session state is kept synchronously consistent with a corresponding state in Server 2.

The Service Witness Protocol is not responsible for the synchronization of states within the SMB file server cluster. Microsoft has left the HA/cluster/scale-out capability to the proprietary technology of each NAS vendor. However, SWP regularly observes the status of all services under its watch. Continue reading