Right time for Andrew. The Filesystem, that is.

I couldn’t hold my excitement when I discovered Auristor® early last week. I stumbled upon the ComputerWeekly article “Want to side step Public Cloud? Auristor® offers global file storage.” Given the spate of news not exactly praising the public cloud storage vendors nowadays, the article’s title caught my attention. Immediately, the Andrew File System (AFS) came to mind. I was perplexed at first because I had never seen or heard of a commercial version of AFS before. This news gave me goosebumps.

For the curious, I am sure many will ask: who is this Andrew anyway? And what is my relationship with this Andrew?

One time with Andrew

A bit of my history. I recall quite vividly helping Intel in Penang, Malaysia implement their globally distributed file caching mechanism with the NetApp® filer’s NFS. It was probably 2001, and I believe Intel wanted to share their engineering computing (EC) files between their US facilities and the Intel Penang Design Center (PDC). As I worked alongside the Intel folks, I found out that this distributed file caching technology was called the Andrew File System (AFS).

Although I can’t really recall how the project went, I remember it being a bed of bugs at the time. But being the storage geek that I am, I obviously took some time to get to know Andrew the File System. 20 years have gone by, and I never really thought of AFS coming out as a commercial solution, or even knew of it as one, until Auristor®.


A little about Andrew

My knowledge of it was shallow then and still is now, so I tried to recall the file system’s architecture over the weekend. I googled it, of course. Here is how much I learned from my weekend fling.

In a nutshell, AFS is a single namespace distributed global filesystem that links shared file resources across multiple trusted computers. It is location agnostic, and is based on a client-server architecture.

AFS client-server architecture

The AFS client sees the /afs filesystem as part of the local filesystem directory tree, and each “location” is considered a cell, running an independent AFS service. Each cell could be a different domain, where each domain hosts a group of trusted file server resources. In this manner, the domain could be a public domain such as abc.com or xyz.com, or a networked filesystem within the LAN.
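To make that concrete, here is a hypothetical sketch of what an AFS client might see under /afs. The cell names abc.com and xyz.com are just illustrative placeholders, not real cells:

```
/afs
├── abc.com/          # one cell: an independently administered AFS service
│   └── usr/alice/    # volumes and directories served by that cell
└── xyz.com/          # another cell, possibly on the other side of the world
    └── projects/
```

To the client, both cells are simply directories in the local tree; where the bytes actually come from is the cache manager’s problem, not the user’s.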

The AFS service consists of many functions besides just presenting the volumes and file services in the cells. There are other important services, such as the control services, the cache manager, the Kerberos KDC (key distribution center), the backup and update services, and more. I have yet to learn more about them, but each of these AFS services is important to maintaining the scalability and reliability of the AFS ecosystem. Having been battle-proven in unreliable wide-area and untrusted networks from the mid-90s to the early 2000s, AFS, I believe, has grown up to address the scale of distributed file services, securely and privately, akin to a private CDN (content delivery network) for files.
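To illustrate one of those pieces – the cache manager – here is a toy Python sketch of the callback idea AFS is known for: the file server promises to notify a client when a cached file changes, so the client can serve reads locally until that promise is broken. This is a conceptual model I wrote for illustration; it is not the real AFS protocol or any Auristor® API.

```python
# Toy model of AFS-style callback promises (illustrative only).

class FileServer:
    def __init__(self):
        self.files = {}        # path -> content
        self.callbacks = {}    # path -> set of clients holding a promise

    def fetch(self, client, path):
        # Hand out the data along with a callback promise to this client.
        self.callbacks.setdefault(path, set()).add(client)
        return self.files[path]

    def store(self, path, content):
        self.files[path] = content
        # Break outstanding callbacks so clients drop their stale copies.
        for client in self.callbacks.pop(path, set()):
            client.invalidate(path)

class CacheManager:
    def __init__(self, server):
        self.server = server
        self.cache = {}        # local cache: path -> content

    def read(self, path):
        if path not in self.cache:              # miss, or callback was broken
            self.cache[path] = self.server.fetch(self, path)
        return self.cache[path]                 # hit: no network round trip

    def invalidate(self, path):
        self.cache.pop(path, None)

server = FileServer()
server.files["/afs/abc.com/readme"] = "v1"
client = CacheManager(server)
print(client.read("/afs/abc.com/readme"))   # "v1", fetched from the server
server.store("/afs/abc.com/readme", "v2")   # callback breaks the cached copy
print(client.read("/afs/abc.com/readme"))   # "v2", transparently re-fetched
```

The point of the design is that repeated reads cost nothing over the wide area network until the file actually changes – exactly the property that made AFS attractive over slow links.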

The time is ripe and right

20 years ago, the Andrew File System had challenges. I think of the poor network speeds dominant at that time – maybe X.25, SDH/SONET, Frame Relay or something else. It did not matter which, because latency over the wide area network was poor regardless. There was a lot of scripting involved, and I remember one Intel Penang engineer by the name of Julian who became a Unix guru with the amount of AFS and NFS work involved. Sadly, a lot of it was tribal knowledge, and some of the more junior Intel folks had a hard time keeping up. So much for “automated” file distribution – it was plenty of shell scripts and crontab configurations everywhere.

Fast forward 20 years, and much has changed. MPLS came into the picture after 2001 as the gold standard for wide area networks, and the Internet has gotten speedier and more reliable. Many organizations have become global, and the need for distributed files across a global namespace is fast becoming mission critical. Organizations are collaborating in one form or another for competitive advantage, and the need for an extensive global file sharing technology has become vital. That is why I believe it is the right time for Auristor® to bring AFS into the world. By now, the technology is ripe, mature, and battle proven.

Furthermore, strong authentication and access control are inherent to AFS. The authentication is based on Kerberos, a well-known and well-tested authentication protocol designed for non-secure networks, which makes it well suited to the untrusted Internet of today.
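From the client side, that handshake is typically a two-step affair in OpenAFS-style deployments – first a Kerberos ticket, then an AFS token. A hypothetical session might look like this, with the principal and cell names made up for illustration:

```
$ kinit alice@ABC.COM    # authenticate to the Kerberos KDC, obtain a ticket
$ aklog abc.com          # exchange the ticket for an AFS token for that cell
$ tokens                 # list the AFS tokens the cache manager now holds
```

Only once the token is in place will the file servers in the cell honour the client’s requests, subject to the access control lists set on each directory.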

Making the connections and more

Lastly, I connected with a couple of people at Auristor® last week. They are not named Andrew Carnegie and Andrew Mellon – that is how the Andrew File System got its name – but Jeffrey Altman, Auristor®’s CEO, and Gerry Seidman, its President. They were very affable and quickly shared their knowledge and experience with me. Such friendly demeanour exemplifies the culture of the company.

I am still catching up to learn more about Auristor®. There is still plenty to learn and plenty to share, but I have revived and rehydrated my thirst for this venerable file system. So here is to the revival of the Andrew File System, and an ode to Auristor® for bringing this wonderful file system technology to the rest of us.

 


About cfheoh

I am a technology blogger with 25+ years of IT experience. I write heavily on technologies related to storage networking and data management because that is my area of interest and expertise. I introduce technologies with the objectives to get readers to *know the facts*, and use that knowledge to cut through the marketing hypes, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then, there will be progress. I am involved in SNIA (Storage Networking Industry Association) and as of October 2013, I have been appointed as SNIA South Asia & SNIA Malaysia non-voting representation to SNIA Technical Council. I currently run a small system integration and consulting company focusing on storage and cloud solutions, with occasional consulting work on high performance computing (HPC).
