The Return of SAN and NAS with AWS?

AWS what?

Amazon Web Services announced Outposts at re:Invent last week. It was not much of a surprise for me, because when AWS struck its partnership with VMware in 2016, the undercurrents were already there for AWS services to arrive right at the doorstep of any datacenter. In my mind, AWS has built so far out in the cloud that eventually, the only way to grow is to come back to the core of IT services – The Enterprise.

Their intentions were indeed stealthy, but I have long been a believer in the IT pendulum. Whatever swings out to the left or the right eventually comes back to the centre. History has proven that, time and time again.

SAN and NAS coming back?

A friend of mine casually mentioned the AWS Outposts announcement. Does that mean SAN and NAS are coming back? I couldn’t hide my excitement at the prospect of their return, but … be still, my beating heart!

I am a storage dinosaur now. My era started in the early 90s. SAN and NAS were a big part of my career, but cloud computing has changed and shaped the landscape of on-premises shared storage. SAN and NAS are probably a closed book to the younger generation of storage engineers and storage architects, who are more adept with S3 APIs and Infrastructure-as-Code. The nuts and bolts of Fibre Channel, SMB (or CIFS if one still prefers it), and NFS have lost prominence, and concepts such as FLOGI, PLOGI, SMB mandatory locking, NFS advisory locking and even the iSCSI IQN are probably alien to many of them.
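For the curious, the “advisory” in NFS advisory locking means exactly what it says: the lock only constrains processes that also ask for it. A minimal POSIX sketch (using flock on a local file; NFS adds its own lock manager on top, but the advisory semantics are the same):

```python
import fcntl
import os
import tempfile

# Create a scratch file to lock.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
with open(path, "w") as f:
    f.write("v1")

with open(path, "r+") as holder:
    # Take an "exclusive" advisory lock on the file.
    fcntl.flock(holder, fcntl.LOCK_EX)
    # A non-cooperating writer that never asks for the lock
    # can still scribble over the file:
    with open(path, "w") as rogue:
        rogue.write("v2")  # succeeds despite the exclusive lock
    fcntl.flock(holder, fcntl.LOCK_UN)

print(open(path).read())  # prints "v2" -- the lock never blocked the write
```

Contrast this with SMB mandatory locking, where the server itself refuses conflicting access.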

What is AWS Outposts?

In a nutshell, AWS will be selling servers and infrastructure gear. The AWS-branded hardware, from a single server to full racks, will be shipped to a customer’s datacenter or any hosting location, packaged with popular AWS compute and storage services and, optionally, with VMware technology for virtualized computing resources.


In a move à la Azure Stack, Outposts completes the round trip of the IT pendulum. It has swung to the left; it has swung to the right; it is now back at the centre. AWS is no longer just a public cloud computing company. It has just become a hybrid cloud computing company. Continue reading

The engineering of Elastifile

[Preamble: I was a delegate of Storage Field Day 12. My expenses, travel and accommodation were paid for by GestaltIT, the organizer and I was not obligated to blog or promote the technologies presented in this event]

When it comes to large-scale storage capacity requirements with distributed cloud and on-premise capability, object storage is all the rage. Amazon Web Services launched its object-based S3 storage service more than a decade ago, and the romance with object storage began.

Today, there are hundreds of object-based storage vendors out there, touting feature after feature of invincibility. But after researching and reading through many design and architecture papers, I found that many object-based storage vendors began to sound the same.

At the back of my mind, object storage is not easy when it comes to integration with most applications. Yes, there is a new breed of cloud-based applications with RESTful CRUD API operations to access object storage, but most applications still rely on file systems to access storage for capacity, performance and protection.

These CRUD and CRUD-like APIs are the common semantics for interfacing with object storage platforms. But many, many real-world applications do not have the object semantics to interface with storage. They are mostly designed to interface and interact with file systems, and secretly, I believe many application developers and users want a file system interface to storage. It does not matter if the storage is on-premise or in the cloud.

Let’s not kid ourselves. We are most natural when we work with files and folders.
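A minimal sketch of the two idioms makes the difference obvious. Here an in-memory dict stands in for an S3-style bucket (the paths and keys are made up for illustration; real code would issue PUT/GET/DELETE calls over HTTP):

```python
import tempfile
from pathlib import Path

# File-system idiom: hierarchical folders, open/read/write.
root = Path(tempfile.mkdtemp())
(root / "projects" / "report").mkdir(parents=True)
doc = root / "projects" / "report" / "q3.txt"
doc.write_text("quarterly numbers")

# Object idiom: a flat key namespace with whole-object
# Create/Read/Update/Delete. The "folder" is just a key prefix.
bucket = {}
bucket["projects/report/q3.txt"] = b"quarterly numbers"  # Create (PUT)
data = bucket["projects/report/q3.txt"]                  # Read (GET)
bucket["projects/report/q3.txt"] = b"revised numbers"    # Update = full re-PUT
del bucket["projects/report/q3.txt"]                     # Delete
```

The file side gives you directories, partial reads and writes, and locking; the object side gives you only whole-blob operations against opaque keys.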

Implementing object storage also denies us the ability to optimally utilize flash and solid-state storage on-premise when the compute is in the cloud. Similarly, when the compute is on-premise and the flash-based object storage is in the cloud, you get a mismatch of performance and availability requirements as well. In the end, there has to be a compromise.

Another “feature” of object storage is its poor ability to handle transactional data. Most object storage platforms do not allow modification of data once the object has been created. Putting a NAS front end (aka a NAS gateway) on it does not take away the fact that it is still object-based storage at the very core of the infrastructure, regardless of whether it is on-premise or in the cloud.
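The transactional gap is easy to see in a sketch: a file system lets you update a few bytes in place, while an immutable object must be read and rewritten wholesale (again using a dict as a stand-in object store):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 1_000_000)  # a 1 MB file

# File semantics: seek and overwrite 4 bytes in place --
# the filesystem touches only those bytes.
with open(path, "r+b") as f:
    f.seek(500_000)
    f.write(b"ABCD")

# Object semantics (simulated): the object is an immutable blob,
# so "changing" 4 bytes means GETting and re-PUTting the whole 1 MB.
store = {"data.bin": b"\x00" * 1_000_000}        # PUT
blob = store["data.bin"]                          # GET the entire object
store["data.bin"] = blob[:500_000] + b"ABCD" + blob[500_004:]  # full re-PUT
```

Scale that 1 MB up to multi-GB objects under a NAS gateway, and the cost of every small update becomes clear.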

Resiliency, latency and scalability are the greatest challenges when we want to build a truly globally distributed storage or data services platform. Object storage can be resilient and it can scale, but it has to compromise on performance and latency to be so. And managing object storage will never be as natural as managing a file system with folders and files.

Enter Elastifile.

Continue reading

Ryussi MoSMB – High performance SMB

I am back in the Silicon Valley as a Storage Field Day 12 delegate.

One of the early presenters was Ryussi, who was sharing a proprietary SMB server implementation for Linux and Unix systems. The first thing that came to my mind was: why not SAMBA? It’s free; it works; it has 25 years of maturity. But my experience with SAMBA, even in the present 4.x, does have its quirks and challenges, especially in the performance of large file transfers.

One of my customers uses our FreeNAS box, a 50TB system serving computer graphics artists and a rendering engine. After the box had been running for about 3 months, one case escalated to us was that the SMB shares suddenly could not be mapped. All the clients were running Windows 10. Our investigation led us to look at the performance of SMB in the SAMBA 4 of FreeNAS.

This led to other questions, such as vfs_aio_pthread, the FreeBSD/FreeNAS implementation of asynchronous I/O that works around the performance weaknesses of the POSIX AIO interface. The FreeNAS forum is flooded with reports of the SMB service disappearing during large file transfers. Without getting too deep into the SMB performance issue, we decided to set the “Server Minimum Protocol” and “Server Maximum Protocol” to SMB 2.1. The FreeNAS box at the customer site has been stable for the past 5 months.
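For reference, the equivalent Samba configuration looks something like this (in FreeNAS these lines go into the SMB service’s auxiliary parameters; SMB2_10 is Samba’s identifier for the 2.1 dialect):

```ini
[global]
    # Pin the negotiated dialect to SMB 2.1 to sidestep the
    # large-transfer instability we saw with newer dialects.
    server min protocol = SMB2_10
    server max protocol = SMB2_10
```

Pinning both the minimum and maximum forces every client to negotiate exactly SMB 2.1, at the cost of newer SMB 3 features such as multichannel and encryption.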

Continue reading

The reverse wars – DAS vs NAS vs SAN

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane, as the channel-like throughput of the Fibre Channel protocol, coupled with FC’s million-device addressing, obliterated parallel SCSI, which could only handle 16 devices at throughput of up to 80 (later 160 and 320) MB/sec.

NAS – defined by the CIFS/SMB and NFS protocols – was happily chugging along on 100 Mbit/sec networks, and occasionally getting sucked into arguments about why SAN was better than NAS. I was already heavily dipped into NFS, because I was pretty much a SunOS/Solaris bigot back then.

When I joined NetApp in Malaysia in 2000, the NAS-SAN wars were going on, waiting for me. NetApp (or Network Appliance, as it was known then) was trying to grow beyond its dot-com roots into the enterprise space, and guys like EMC and HDS were frequently trying to put NetApp down.

“It’s a toy” was the most common jibe I got in regular engagements, until EMC suddenly decided to attack Network Appliance directly with their EMC CLARiiON IP4700. EMC guys would fondly remember this as the “NetApp killer”. Continue reading

Why demote archived data access?

We are all familiar with the concept of data archiving. Passive data gets archived from production storage and is migrated to a slower and often cheaper storage medium such as tapes or SATA disks. Hence the terms nearline and offline data were coined. With that, IT constantly reminds users that archived data is infrequently accessed, and that they therefore have to accept slower access to passive, archived data.

Business conditions have certainly changed, because the need for data to be 100% online is becoming more relevant. The new competitive nature of business dictates that data must be at the fingertips, because speed and agility are the new competitive advantage. Often the total amount of data, production and archived, runs into hundreds of TBs, even into petabytes!

The industries I am familiar with – Oil & Gas, and Media & Entertainment – are facing this situation. These industries have a deluge of files and unstructured data in their archives, much of it dormant, inactive and sitting on old tapes of a bygone era. Yet these files and unstructured data have the most potential to be explored, mined and analyzed to realize their value to the organization. In short, the archived data and files must be democratized!

The flip side is that when archived files and unstructured data are coupled with a slow access interface or an unreliable storage infrastructure, the value of the archived data is downgraded, because painful access frustrates the applications and business requirements that depend on it. How would organizations value archived data more if the access path to the archived data is so damn hard???!!!

An interesting solution fell into my lap some months ago, and putting A and B together (A + B), I believe the access path to archived data can be made unbelievably high-performance, simple, transparent and, most importantly, free of the BLOODY PAIN of FILE AND DATA MIGRATION! Storage administrators and engineers familiar with data migration, especially migrations running into hundreds of TBs or even PBs, will know what I mean!

I have known about this solution for some time now, because I have been avidly following its development ever since its founders left NetApp, after their Spinnaker venture, to start Avere Systems.


Continue reading

All aboard Starboard

The Internet was abuzz with the emergence of a new storage player uncloaking from stealth mode yesterday. Calling themselves Starboard Storage Systems, they plan to shake up the SMB/SME market with a full-featured storage system that competes with established players, but with a much more competitive pricing model.

Let’s face it. A cheaper price sells. That is the first door opener right there.

Their model, the AC72, has a starting price of USD$59,995. The capacity is 24TB and also includes 3 SSDs for performance caching, but I can’t really comment on whether that is a good price. It’s all relative, and has different contexts and perceptions in different geographies. An HDD today is already 3 or 4TB, so a raw capacity of 24TB is pretty easy to fill up.
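As a rough back-of-the-envelope (list price over raw capacity, ignoring the SSD cache and any RAID overhead):

```python
price_usd = 59_995
raw_tb = 24

# Naive cost per raw terabyte and per raw gigabyte at list price.
print(round(price_usd / raw_tb))              # ~2500 USD per raw TB
print(round(price_usd / (raw_tb * 1000), 2))  # ~2.5 USD per raw GB
```

Usable $/GB will of course be higher once RAID and spares are carved out, which is why raw-capacity pricing comparisons should be taken with a pinch of salt.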

But Starboard Storage has a trick up its sleeve. An SMB/SME customer usually cannot afford to dedicate a storage system to just one application; it is too cost-prohibitive. They are likely to have a mixed workload, with a combination of applications such as email (think Exchange), databases (think SQL Server or MySQL), web applications, file serving (think Windows file shares) and server virtualization (think VMware or Hyper-V). They are likely to have a mix of Windows and Linux. Hence Starboard Storage combines all file and block storage protocols in an “all-inclusive” license covering NFS, CIFS, iSCSI and Fibre Channel.

An “all-inclusive” license is smart thinking, because the traditional storage players such as EMC, IBM, NetApp and the rest of the top 6 storage vendors today have far too prohibitive license cocktails. Imagine telling your prospect that your storage model is unified storage supporting all storage protocols, but that they have to pay to have each protocol license turned on. In this day and age, the unified storage story is getting old and is no longer a potent selling advantage.

Other features such as snapshots and thin provisioning should be a shoo-in, with replication and flexible volumes (I am a NetApp guy ;-)) and aggregates (better known as Dynamic Storage Pools in Starboard Storage terminology) a must-have. I am really glad that Starboard Storage could be the first to bundle everything in an “all-inclusive” license. Let’s hope the big 6 follow suit.

The AC72 also sports an SSD Accelerator tier for automatic performance caching, for both reads and writes. It is not known (at least I did not research this deeply enough) whether the Accelerator tier is also part of the “all-inclusive” license, but if it is, then it is fantastic.

Starboard Systems’ marketing theme revolves around high-speed sailboats. The AC in AC72 stands for America’s Cup, and the AC72 is its high-speed catamaran. Their technology is known as the MAST (Mixed Workload, Application-Crafted Storage Tiering) architecture (that’s a mouthful), and the target is exactly that – a mixed workload environment – just what the SMB/SME ordered.

The AC72 has dual redundant controllers for system availability and scales up to 474TB with the right expansion shelves. Starboard claims better performance in an Evaluator Group IOmeter lab test, as shown in the graph below:

This allows Starboard to claim that they have the “best density with leading dollar per GB and dollar per IOPS”.

To me, the window of opportunity is definitely there. For a long time, many storage vendors have claimed to be SMB/SME-friendly without being really friendly. SMB/SME customers have expanded their requirements and want more, but there are too many prohibitions and restrictions to get the full feature set from the respective storage system vendors. Starboard’s generosity in including “all” licenses is definitely a boon for SMB/SMEs, and it means that someone has been listening to customers better than the others.

The appeal is there too for Starboard Storage Systems because, looking at their specifications, I am sure they are not skimping on enterprise quality.


Speaking of enterprise storage systems, Taiwanese-brand competitors here in Malaysia, such as Infortrend and QSAN, aren’t exactly “cheap”. They might sound “cheap” because the local partners lack the enterprise mentality and frequently position these storage systems as cheap compared with the likes of EMC, HP, IBM and NetApp. The “quality” of these local partners – their ability to understand SMB/SME pain points, to articulate benefits, and to design and architect total storage and data management solutions for customers – contributes to the “cheap” mentality of customers.

I am not deriding these brands, because they are good storage systems in their own right. But local partners of such brands tend to cut corners, cut prices at the first sign of customer hesitation and, worst of all, make outrageous claims, all contributing to the “cheap” mentality. These brands deserve better.

SMB/SME customers, too, are at fault. Unlike the customers I have dealt with across the borders, both north and south of Malaysia, customers here take a very lackadaisical approach when procuring IT equipment and solutions for their requirements. They lack the culture of finding out more and improving. It is always about comparing everything to the cheapest price!

Other brands such as Netgear, D-Link and QNAP are really consumer-grade storage disguised as “enterprise-class” storage for the SMB/SME market, most relying on lower-grade CPUs such as Atom, Celeron and other non-enterprise x86 CPUs, and carrying less than 8GB of memory. I am not even sure if the memory in these boxes is ECC memory. Let’s not even go into the HDDs of these boxes, because most SMB/SME customers in Malaysia lack the knowledge to differentiate the MTBF hours of consumer and enterprise HDDs.

So, kudos to Starboard Storage Systems for packaging a superb bundle of enterprise features at an SMB/SME price, at least for the US market anyway. USD$59,995 is still a relatively high price in Malaysia, because most SMB/SMEs still look at price as the main point of discussion. But I believe Starboard brings a no-brainer enterprise storage system if they ever want to consider expanding into Malaysia.