Whitewashing Cloudsh*t

Pardon my French, but I have just about had enough of it!

I was invited to attend the Internet Alliance Association‘s event today at OneWorld Hotel. It was aptly titled “Global Trends on Cloud Technology”. I don’t know much about the Internet Alliance, but I was intrigued by the event because I wanted to know what the Malaysian hosting and service providers are doing in the cloud. I have not been in touch with the hosting provider landscape for a few years now, so I was an eager beaver, raring to learn more.

After registration, I quickly went to the first booth behind the front counter. The man there said he was a cloud consultant, so I asked what his company does. He said they provide IaaS, PaaS and so on. I asked him whether I could purchase IaaS with a credit card, and what the turnaround time was to get a normal server running Windows 2008.

He obliged with a yes; they accept credit card purchases. But the turnaround to have the virtual server ready is one day. It would take 24 hours before I get a virtual server running Windows. So I assumed the entire process was manual, and I told him so. He assured me that the whole process is automatic. At the back of my mind I wondered: if this was automatic, why would it take 24 hours? Reality set in when I realized I was dealing with a Malaysian company. Ah, I see.

A few more sentences were exchanged. He told me that they are hosted at AIMS, a popular choice. I inquired about their disaster recovery. They don’t have disaster recovery. More perplexity for me. Hmmm …

In the end, I was kinda turned off by his “story” about how great they are, better than Freenet and AIMS and so on. If they are better than AIMS, why host their cloud at AIMS?

I went to another booth which had a sign that read “1-Nimbus”. The “1” is the usual 1Malaysia logo, with the word “Nimbus” next to it. Here’s that “1” logo below.

It was the word “Nimbus” that captured my attention. I thought, “Wow, is this really Nimbus?” Apparently not. Probably some Malaysian company borrowed that name … we are smart that way. “1-Nimbus, Cloud Backup”, it read. I asked the chap (another consultant) who gave me the brochure, “How does it work? Does it require any agent?”

“Err, actually, I am not really technical. Let me refer you to my colleague.” A bespectacled chap popped over and introduced himself as a technical guy. I asked again, “How does this cloud backup work?” His reply: “Err, it’s not really our product. Go check out the website”, and he gave me another brochure. Damn!

From then on, there were more excuses as I kept repeating the same question from one booth to another – tell me, what do you do in the cloud? Right there and then, I decided to do a pie chart of how I assessed the exhibition lobby floor.

 

I went on. There were about 15 booths. With the exception of FalconStor, only one other booth managed to tell me some decent stuff. That was KumoWorks, and the guy spoke well about their Cloud Desktop with Citrix and iGel thin clients. And they are from Singapore. It figures!

I cannot help but feel nauseated by most of the booths at the OneWorld Hotel exhibition lobby. If this is the state of our “Cloud Service Providers”, I think we are in deep sh*t. Whitewashing and overusing the word “Cloud” everywhere is one thing; these guys don’t even know what they are talking about. It is about time we admit that the Singaporeans are better than us. Even if they might not know their stuff well, at least they know how to package the whole thing and BS me intelligently!

And I learned a new “as-a-Service” today. One cloud consultant introduced me to “Application-as-a-Service”. I was so tempted to call it “Ass“.

NetApp to buy Commvault?

The rumour mill is churning again that Commvault is an acquisition target, and this time the suitor is NetApp. The rumour is not new, but somehow Commvault has gotten too big in the past couple of years to be swallowed up. This time, though, it could happen, as NetApp is hungry … very hungry.

NetApp took a big hit a couple of weeks back, when it announced its Q3 numbers. Revenues fell short of analysts’ expectations and the share price took a beating. While its big rival, EMC, has been gaining momentum on all fronts, it appears that NetApp is getting overwhelmed by EMC’s one-stop shop. EMC is everything to everyone who wants storage, data protection software, services, data management, scale-out, data security, big data, cloud storage, virtualization and much more. NetApp has been very focused on what they do best, and that is storage. Everything revolves around their crown jewel, Data ONTAP, and they recently added Engenio to their stable of storage solutions.

NetApp does not mix the FAS storage with Engenio, making sure that their story-telling gels, but in the past few years many other vendors have been taking the “one-stack-fits-all” approach. Oracle has Exadata, where servers, storage, database and networking are all in one. Many others are doing the same, while NetApp prefers more “loosely-coupled” partnerships, such as their “Imagine Virtually Anything” concept partnership with VMware and Cisco, in the shape of FlexPod. FlexPod is a flexible infrastructure package comprising pre-sized storage, networking and server components, designed to ease the IT transformation journey – from virtualization all the way to cloud computing.

Commvault would be a great buy (though a very expensive one) for NetApp. Things fit perfectly if NetApp decides to abandon its overly protective shield and start becoming a “one-stop shop” for its customers, starting with data protection. Commvault is already the market leader in the Enterprise Disk-based Backup and Recovery space, as reflected in Gartner’s Magic Quadrant report of January 2011.

It’s amazing to see how Commvault became the leader in this space in just a few short years, and part of its unique approach is a common core engine called the Common Technology Engine (CTE). This singular core architecture allows the different data management components – Backup, Replication, Archiving, Resource Management, and Classification & Search – to share resources and, more importantly, detailed knowledge of the data being managed.

In the middle of this year, NetApp signed an OEM deal with Commvault to resell their SnapProtect solution, which integrates with NetApp’s SnapMirror. SnapProtect manages NetApp snapshots and SnapMirror replications, and also adds a tape-out capability for SnapMirror. The diagram below shows how Commvault SnapProtect fits into NetApp’s Snapshot and SnapMirror data protection architecture.

 

Sources at NetApp’s C-level said that NetApp is still very much focused on its ONTAP strategy and on its “loosely-coupled” partnerships with key partners like VMware, Cisco, F5 and Quantum. But at the back of NetApp’s mind, I believe, it knows it is time to do something about it. This “focused” (which could also be interpreted as overly cautious) approach is probably on its last legs, as cloud computing is changing all of that. The cost of integrating different, albeit flexible, storage, data protection and data management components is prohibitive to cloud service providers, and NetApp must take a bolder approach to win the hearts of these providers. Having a one-stop shop isn’t so bad anymore; it is beginning to make sense, and NetApp had better do something quick. Commvault is one of the best out there, and NetApp shouldn’t lose that chance.

Note: While the rumours of NetApp and Commvault are swirling, there have been rumours that Quantum could be another NetApp target.

The recipe for storage performance modeling

Good morning, afternoon, evening, Ladies & Gentlemen, wherever you are.

Today, we are going to learn how to bake, errr … I mean, make a storage performance model. Before we begin, allow me to set the stage.

Don’t you just hate it when you are asked to do storage performance sizing and you don’t have a freaking idea how to get started? A typical techie would probably say, “Aiya, just use the capacity lah!”, and usually, they will proceed to size the storage according to capacity. In fact, sizing by capacity is the worst way to do storage performance modeling.

Bear in mind that storage is not a black box, although some people wished it was. It is not black magic when it comes to performance sizing because things can be applied in a very scientific and logical manner.

SNIA (Storage Networking Industry Association) has put together a storage performance modeling methodology (that’s quite a mouthful), which basically boils down to a few key ingredients. This recipe is for storage performance modeling in general, and I am advising you guys out there to engage your storage vendors’ professional services. They will know their storage solutions best.

And I am going to say this to you – don’t be cheap; engage professional services and get to the experts out there. I was having a chat with a consultant just now at McDonald’s. I have known this friend of mine for about 6-7 years now; his name is Sugen Sumoo, the Director of DBORA Consulting. They specialize in Oracle database performance tuning and performance forecasting, something a typical DBA can’t do, because DBORA Consulting is the professional service that brings expertise and value to Oracle customers. Likewise, you have to engage your respective storage professional services as well.

In a cookbook or a cooking show, you are presented with the ingredients used. In this recipe for storage performance modeling, the ingredients (in no particular order) are:

  • Application block size
  • Read and Write ratio
  • Application access patterns
  • Working set size
  • IOPS or throughput
  • Demand intensity

Application Block Size

First of all, the storage is there to serve applications. We always have to look from the applications’ point of view, not the storage’s point of view. Different applications have different block sizes. Databases typically range from 8K-64K and backup applications usually deal with larger block sizes. Video applications can have 256K block sizes or higher. It all depends.

The best way is to find out from the DBA, email administrator or application developers. The unfortunate thing is that most so-called technical people or administrators in Malaysia don’t have a clue about the applications they manage. So, my advice to you storage professionals: do your research on the application and take the default value. These clueless fellas are likely to have taken the defaults anyway.

Read and Write ratio

Applications behave differently at different times of the day, and at different times of the month (no, it’s not PMS). At the end of the financial or calendar year, there are additional tasks that these applications perform as well. But on a typical day, there is a different weightage, or percentage, of read operations versus write operations.

Most OLTP (online transaction processing)-based applications tend to be read-heavy and write-light, but we need to find out the ratio. Typically, it can be a 2:1 ratio or 60%:40%, but it is best to speak to the application administrators about it. DSS (Decision Support Systems) and data warehousing applications could have much higher reads than writes, while seismic-analysis applications can be write-heavy during the analysis periods. It all depends.

To counter the “clueless” administrators, ask lots of questions. Find out the workflow of several key tasks and ask what those particular tasks do at different checkpoints of the application’s processing. If you are lazy (please don’t be lazy, because it degrades your value as a storage professional), use a rule of thumb.

Application access patterns

Applications behave differently in general. They can be sequential, like backup or video streaming. They can be random like emails, databases at certain times of the day, and so on. All these behavioral patterns affect how we design and size the disks in the storage.

Some RAID levels tend to work well with sequential access, and others with random access. It is not difficult to find out about an application’s pattern, and if you read up on the different RAID levels in storage, you can easily identify the RAID levels suitable for each type of behavioral pattern.

Working set size

This variable is a bit more difficult to determine. The working set is the chunk of the application’s data that has to be loaded into a working area, usually memory and cache memory, to be used and abused by the application users.

Unless someone is well versed in the application, one would not be able to determine how much of it needs to sit in memory and in cache memory. Typically, this can only be determined after the application has been running for some time.

The flexibility of having SSDs, especially DRAM-type SSDs, is very useful in ensuring that there is sufficient “working space” for these applications.

IOPS or Throughput

According to the SNIA model, for I/O sizes smaller than 64K, IOPS should be used as the yardstick for storage performance modeling. For anything larger, use throughput, for which MB/sec is the measurement unit.
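To make that rule of thumb concrete, here is a minimal sketch in Python (my own illustration, not something from the SNIA document) that picks the yardstick based on I/O size and converts an IOPS figure into MB/sec for a given block size. The sample workloads and numbers are assumptions for illustration only.

```python
# Minimal sketch: pick the sizing yardstick based on I/O size (SNIA guideline:
# < 64K -> IOPS, >= 64K -> throughput) and convert between the two.
# The sample numbers below are illustrative assumptions, not real application data.

KIB = 1024

def choose_yardstick(io_size_bytes: int) -> str:
    """Return 'IOPS' for small I/O, 'throughput' for large I/O."""
    return "IOPS" if io_size_bytes < 64 * KIB else "throughput"

def iops_to_mb_per_sec(iops: float, io_size_bytes: int) -> float:
    """Throughput (MB/sec) implied by a given IOPS rate at a given block size."""
    return iops * io_size_bytes / (1024 * 1024)

if __name__ == "__main__":
    # Hypothetical OLTP database: 8K blocks, 5,000 IOPS expected
    print(choose_yardstick(8 * KIB))              # -> IOPS
    print(iops_to_mb_per_sec(5000, 8 * KIB))      # -> ~39 MB/sec

    # Hypothetical backup stream: 256K blocks
    print(choose_yardstick(256 * KIB))            # -> throughput
```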

The application guy would be able to tell you what kind of IOPS their application is expecting or what kind of throughput they want. Again, ask a lot of questions, because this will help you determine the type of disks and the kind of performance you give to the application guys.

If the application guy is clueless again, ask someone more senior or ask the vendor. If the vendor engineers cannot give you an answer, then they should not be working for the vendor.

Demand intensity

This part is usually overlooked when it comes to performance sizing. Demand intensity refers to how intense the I/O requests are. They could come from one channel or one part of the application, or from several parts of the application in parallel. It is as if the storage is being ‘bombarded’ by the applications, and this is the part that is hard to determine as well.

In some applications, the degree of intensity or parallelism can be tuned; to find out, ask the application administrator or developer. If not, ask the vendor. Also, do a lot of research on the application’s architecture.
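One back-of-envelope way to reason about demand intensity is Little’s Law, which ties the number of outstanding I/Os (the parallelism the application sustains) to IOPS and response time. The sketch below is purely illustrative, with assumed numbers, not a vendor formula.

```python
# Little's Law for storage: outstanding I/Os = IOPS x response time (seconds).
# Useful for sanity-checking how much parallelism an application must sustain
# to hit a target IOPS at a target latency. Numbers are illustrative assumptions.

def outstanding_ios(iops: float, response_time_ms: float) -> float:
    """Average number of I/Os in flight needed to sustain 'iops' at 'response_time_ms'."""
    return iops * (response_time_ms / 1000.0)

if __name__ == "__main__":
    # Hypothetical target: 10,000 IOPS at 5 ms average response time
    print(outstanding_ios(10_000, 5.0))   # -> 50 I/Os in flight

    # If the application can only keep 8 I/Os outstanding at 5 ms latency,
    # the best it can drive is 8 / 0.005 = 1,600 IOPS, however fast the array is.
    print(8 / 0.005)                      # -> 1600.0
```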

And one last thing: what I have learned is to add buffers to the storage performance model. Typically I would add about 10-20% extra, but you never know. As storage professionals, I would strongly encourage you to engage professional services, because it is worthwhile, especially in the early stages of the sizing. It is usually a much more expensive affair to re-size after the applications have been installed and are running.
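To pull all the ingredients together, here is a rough sketch of the kind of spindle-count estimate I am talking about. The per-disk IOPS figures, the RAID write penalties and the 20% buffer are all illustrative assumptions, not vendor data; your storage vendor’s professional services will have the real numbers for their arrays.

```python
# Back-of-envelope spindle estimate pulling the "ingredients" together.
# All constants below (per-disk IOPS, RAID write penalties, the buffer) are
# illustrative assumptions; real sizing should come from your storage vendor's
# own tools and professional services.

import math

DISK_IOPS = {"15k_sas": 180, "10k_sas": 140, "7.2k_sata": 80}   # assumed per-disk IOPS
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}       # classic write penalties

def disks_needed(frontend_iops, read_ratio, raid="raid5", disk="15k_sas", buffer_pct=0.20):
    """Estimate the spindles needed for a front-end IOPS load with a read/write mix."""
    reads = frontend_iops * read_ratio
    writes = frontend_iops * (1 - read_ratio)
    # Back-end IOPS: a read hits one disk, each write costs 'penalty' disk operations
    backend_iops = reads + writes * RAID_WRITE_PENALTY[raid]
    backend_iops *= (1 + buffer_pct)          # the 10-20% buffer mentioned above
    return math.ceil(backend_iops / DISK_IOPS[disk])

if __name__ == "__main__":
    # Hypothetical OLTP workload: 5,000 IOPS, 60% reads, RAID-5 on 15k drives
    print(disks_needed(5000, 0.60))           # -> roughly 74 spindles
```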

“Failure to plan is planning to fail”.  The recipe isn’t that difficult. Go figure it out.

btrfs – a better butter from a better COW?

There’s been a lot of chatter about the default file system in the recently released Fedora 16, which came out on November 8, 2011. For months, there were rumours that the default file system for Fedora 16 would be btrfs (B-tree file system, better known as butter-FS). But after the release, the default file system of Fedora 16 is still ext4, much to the dismay of many. btrfs holds a lot of potential because, in the space of less than 4 years, it has moved up the hierarchy to become one of the top file systems in the Linux ecosystem.

File systems in Linux have not had a knight in shining armour for a long time. The file system kings of the Linux world were ext2/3 for the RedHat- and Debian-flavoured distros, and reiserfs for the SuSE-flavoured distros. ext4 is now the default in most Linux distros, but the concepts behind ext2/3/4 have not changed much since their inception decades ago.

At the same time, reiserfs had a lot of promise as well, but its development and progress lost their lustre after its principal developer, Hans Reiser, was convicted of the murder of his wife a few years ago. If you are KPC (Malaysian Chinese colloquialism, meaning busybody), you can read the news here.

btrfs is going to be the new generation of file systems for Linux, and even Ted Ts’o, the CTO of the Linux Foundation and principal developer of ext4, admitted that he believed btrfs is the better direction because “it offers improvements in scalability, reliability, and ease of management”.

For those who have studied computer science, the B-Tree is a data structure used in databases and file systems. It is an excellent structure for storing billions and billions of objects and is able to provide fast data retrieval in logarithmic time. B-Tree implementations are already present in some file systems, such as JFS, XFS and ReiserFS. However, these file systems are not shadow-paging file systems (popularly known as copy-on-write file systems).

You see, the B-Tree, in its native form of implementation, is very incompatible with COW file systems. In fact, the combination was thought to be impossible, until someone by the name of Ohad Rodeh came along. He presented a paper at USENIX FAST ’07 which described the challenges of combining the B-Tree concept with shadow-paging file systems. The solution, as he described it, was to introduce insert/remove key operations into the tree structure and to remove the dependency on intra-leaf linking.

Chris Mason, one of the developers of reiserfs, took Ohad’s idea and created a shadow-paging file system based on the B-Tree idea.
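To illustrate the shadow-paging (copy-on-write) idea in the simplest possible terms, here is a toy sketch in Python. This is not btrfs code, just the concept: a snapshot initially shares every block with the live file system, and a new block is only allocated when data is written after the snapshot.

```python
# Toy copy-on-write (shadow paging) model -- not btrfs internals, just the concept.
# A snapshot shares block references with the live "filesystem"; writing a block
# after the snapshot allocates a new block instead of overwriting the shared one.

class CowVolume:
    def __init__(self):
        self.blocks = {}          # block data, keyed by an ever-growing block id
        self.next_id = 0
        self.block_map = {}       # logical block number -> block id (the "tree")

    def write(self, lbn, data):
        """Copy-on-write: never overwrite a block in place, always allocate a new one."""
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = data
        self.block_map[lbn] = bid

    def read(self, lbn, block_map=None):
        bmap = block_map if block_map is not None else self.block_map
        return self.blocks[bmap[lbn]]

    def snapshot(self):
        """A snapshot is just a copy of the map -- the data blocks are shared."""
        return dict(self.block_map)

if __name__ == "__main__":
    vol = CowVolume()
    vol.write(0, "original data")
    snap = vol.snapshot()                 # cheap: copies references, not data
    vol.write(0, "new data")              # COW: goes to a fresh block
    print(vol.read(0))                    # -> new data (live view)
    print(vol.read(0, block_map=snap))    # -> original data (snapshot view)
```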

Traditional file systems tend to follow the idea of the Berkeley Fast File System. Each cylinder group has its own inodes, bitmap and disk blocks. The used space in one cylinder group cannot be shared with another cylinder group, resulting in wastage. At the same time, performance can be an issue, as the disk read/write head frequently has to move back to the inodes to find out where the next used or free blocks are. In a way, it looks like the diagram below.

Chris Mason’s idea of btrfs made the file system look like this:

Today, a quick check of the btrfs wiki page shows that the main btrfs features available at the moment include:

  • Extent based file storage
  • 2^64 byte == 16 EiB maximum file size
  • Space-efficient packing of small files
  • Space-efficient indexed directories
  • Dynamic inode allocation
  • Writable snapshots, read-only snapshots
  • Subvolumes (separate internal filesystem roots)
  • Checksums on data and metadata
  • Compression (gzip and LZO)
  • Integrated multiple device support
  • RAID-0, RAID-1 and RAID-10 implementations
  • Efficient incremental backup
  • Background scrub process for finding and fixing errors on files with redundant copies
  • Online filesystem defragmentation

And Chris Mason and his team still have plenty more to offer for btrfs. It will likely be the default file system in Fedora 17 and, at the rate it is going, could be the file system of choice for the RedHat, Debian and SuSE distros very soon.

There have been a lot of comparisons between btrfs and ZFS, since both are part of Oracle now. ZFS is obviously a much more mature and robust file system with more enterprise features (incidentally, ZFS just celebrated its 10th birthday on Halloween 2011 – see Matt Ahrens’ blog), while btrfs is the rising star in the Linux world. But at this moment, the two file systems are set apart in their market positioning and deployment.

ZFS is released under the CDDL (Common Development and Distribution License), while btrfs is under the GNU GPL; both are open source licences. There are controversies surrounding the CDDL licensing scheme, which is incompatible with the GNU GPL scheme.

Oracle can count itself very lucky to have two of the most promising and prominent COW file systems around. It will be interesting to see what Oracle does next. As a proponent of innovation, community and sharing, I sincerely hope that both file systems will continue to thrive in Oracle’s brutal, sales-driven organization. We certainly don’t want to see the kind of dual-ownership controversies and BS that surround Oracle and MySQL, which could come to a head in 2015.

NetApp SPECsfs record broken in 13 days


Thanks to my buddy Chew Boon of HDS, who put me on alert about the new leader of the SPECsfs benchmark results. NetApp’s “world record” has been broken, 13 days later, by Avere Systems.

Avere has posted a result of 1,564,404 NFS ops/sec with an ORT (overall response time) of 0.99 msec. This benchmark was done with 44 nodes, using 6.808 TB of memory and 800 HDDs.

Earlier this month, NetApp touted fantastic results and quickly came out with a TR comparing their solution with EMC Isilon. Here’s a table of the comparison:

 

But those numbers were quickly made irrelevant by the Avere FXT, and Avere claims the world record title with the “smallest footprint ever”. Here’s a comparison from Avere’s blog, with some photos to boot.

 

For the details of the benchmark, click here. And the news from PR Newswire.

If you have not heard of Avere, they are basically the core team of Spinnaker. NetApp acquired Spinnaker in 2003 to create a clustered file system from the Spinnaker technology. The development and integration of Spinnaker into NetApp’s Data ONTAP took years and was buggy, and this gave room for competitors like Isilon to take market share in the clustered NAS/scale-out NAS landscape.

Meanwhile, NetApp did finally come good with the Spinnaker technology, and with ONTAP 8.0.1 and 8.1, the code bases of both platforms merged into one.

The Spinnaker team went on to develop a new technology called the “A-3 Architecture” (shown below) and positioned the product as a NAS accelerator.


The company has had two rounds of funding and now has a high-performance system to compete with the big boys. The name Avere Systems is still pretty much unknown in this part of the world, and this “world record” will help position them more strongly.

But as I have said before, benchmarks are just a way to gain bigger bragging rights. It is a game of leapfrogging, and pretty soon this Avere record will be broken. It is nice while it lasts.

Fujitsu don’t get it

Fujitsu is pissed. Last week, they announced their super high-end array, the Eternus DX8700. They want to compete with the biggest and baddest babies out there – the IBM DS8800, EMC VMAX and HDS VSP. And they are approaching it from the angle of being modular, versus guys like EMC and HDS, whose arrays they liken to monolithic storage.

As quoted from The Register: “Fujitsu says its DX8700 S2 rewrites the rules by making the leap from monolithic to modular architecture”.

Well, congratulations, Fujitsu! The Japanese company makes great products with typical Japanese precision and quality. And there is no doubt that their DX8000 series is a powerhouse in its own right.

It boasts a 6Gbps internal SAS backplane, with SAS, FCoE and 10 Gigabit Ethernet support. The DX8700 supports up to 8 controllers, with 3,072 2.5″ HDDs or SSDs, or 1,536 3.5″ drives. With 3TB drives, it reaches 4 petabytes of raw capacity (3.6PB usable). And it will have automated storage tiering with its Flexible Data Management feature come early next year. A Bind-in-Cache feature is there for pinning certain data in memory for performance reasons, as well as support for 32,768 snapshots.

All these features and specs are fantastic, but in Malaysia, Fujitsu is hardly making waves in the storage industry. A lunch date with a friend of mine who has just joined Fujitsu Malaysia to sell the Eternus product revealed the sentiment about the product in the local market: Eternus who?

First of all, marketing is sorely lacking. Not many people know about Fujitsu’s storage product, and they are also quite conservative in their advertising. This means that my friend in Fujitsu has a mountain to climb when it comes to convincing partners and customers about the Eternus product.

Secondly, they have to build a solutions ecosystem. The need for Eternus to handle backup with 3rd-party vendors such as Symantec NetBackup, Commvault and others is glaring. There are also other areas such as archiving, disaster recovery and so on. And don’t get me started on Tier 1 applications such as Microsoft Exchange, Oracle Database, SQL Server and VMware. These solutions are just not known to many here in Malaysia, except to those who are close to Fujitsu. Even worse, some of their partners don’t even know about Fujitsu’s storage solutions.

I am sure these solutions have already been validated in the US and in Japan and work, but come on, Fujitsu, if you don’t tell people about it, no one will know. I don’t see software vendors and their SEs or sales people saying “our products work well with Fujitsu Eternus“. Who’s Fujitsu Eternus?

Thirdly, it is about mind share, and this requires the Fujitsu sales force and their partners to go out there and pitch, pitch and pitch. It is important to educate these fellas to sell, and to get in front of customers to pitch Fujitsu Eternus.

So what if Fujitsu is pissed? Yes, they are, because competitors probably belittled them. But they have to change their ways here in Malaysia to get noticed. They must get it.

The future is intelligent objects

We are used to the block-based approach and also the file-based approach to data. The two diagrams below show the basics of how we access data, in both block-based and file-based form, on the storage device.

 

For block-based storage, the blocks are merely stored as arrays of unrelated contiguous blocks. For file-based storage, as seen below,

 

there is another layer of abstraction, and this is called the file system. If you look at both diagrams above, there are some random numbers in light blue; these represent the storage device’s – the hard disk drive’s – export of “containers” to the file system or the application that is accessing the storage device. This is usually the LBA (Logical Block Addressing), which is basically a scheme that defines the locations on the hard disk drives. The LBA tells you where the data is stored. For more information about LBA, check out the Wikipedia definition. But the whole idea is that LBA is dumb. It is pretty much static, exported to file systems and applications so that these guys can do something with it.
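To show just how dumb LBA is, here is a tiny illustrative sketch (the 512-byte sector size is the classic assumption): an LBA is nothing more than a linear block number that maps to a byte offset, and the drive knows nothing about what those bytes mean.

```python
# An LBA is just a linear block number; the drive exports it with no knowledge
# of files, owners or meaning. Classic 512-byte sectors assumed for illustration.

SECTOR_SIZE = 512

def lba_to_byte_offset(lba: int) -> int:
    """All the 'intelligence' LBA offers: block number -> byte offset on the platter."""
    return lba * SECTOR_SIZE

print(lba_to_byte_offset(2048))   # -> 1048576 (the 1 MiB mark, and that's all we know)
```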

Something has been brewing in the background since 1994, and it is one of the many efforts to make storage devices intelligent. This new object-based interface started as part of a research project at Carnegie Mellon University (CMU). Initially known as Network Attached Secure Disks (NASD), it eventually made its way to a working group in SNIA, which developed it into an ANSI T10 INCITS standard (ANSI T10 is the guardian of all SCSI standards). This is called Object Storage Device (OSD). The SCSI architecture diagram below shows the layer where OSD resides.

 

The motivation for this is simple: to make today’s storage devices do more computational work, in particular I/O work, freeing the hosts and local systems to concentrate on other computational processing. At the same time, there must be some level of interactivity and management between the storage objects and the computational hosts.

In the diagram below, which compares the block-based and OSD models,

 

you can see that the file system management interface, which sits in the kernel space of the local host/system, is separated out and replaced by the OSD management interface at the storage device.

What does this all mean? It means that the LBA-type addressing we are familiar with in block-based and file-based storage is no longer the way to go because, as I mentioned before, LBA is dumb.

OSD, in a way, replaces the LBA with OIDs (Object IDs). The existing local system and/or its file system will interact with the storage devices using OIDs, and each OID links to its respective object in storage. The object carries a lot of metadata that describes it, giving the object its intelligence and manageability.
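Here is a toy illustration of the difference, not the actual T10 OSD command set: instead of dumb numbered blocks, the device stores self-describing objects keyed by OIDs, each carrying metadata that the device itself can act upon.

```python
# Toy object store -- a sketch of the OID-plus-metadata idea, not the T10 OSD protocol.
# Each object is self-describing: the device holds the data AND metadata about it,
# so the storage itself can make decisions (caching, placement, security).

import time
import uuid

class ObjectStorageDevice:
    def __init__(self):
        self.objects = {}   # OID -> {"data": ..., "metadata": ...}

    def create(self, data: bytes, **metadata) -> str:
        oid = uuid.uuid4().hex                      # the OID replaces the LBA
        metadata.setdefault("size", len(data))
        metadata.setdefault("created", time.time())
        self.objects[oid] = {"data": data, "metadata": metadata}
        return oid

    def read(self, oid: str) -> bytes:
        return self.objects[oid]["data"]

    def get_metadata(self, oid: str) -> dict:
        return self.objects[oid]["metadata"]

if __name__ == "__main__":
    osd = ObjectStorageDevice()
    oid = osd.create(b"seismic trace #42", owner="geo-app", tier="fast", qos="gold")
    print(oid)                      # the application tracks OIDs, not block addresses
    print(osd.get_metadata(oid))    # the device can act on this metadata itself
```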

 

 

The prominence of metadata in OSD means that we can build much more intelligent systems in the future. The OIDs and the objects can be grouped together in a flat design, or organized and categorized in a virtual, hierarchical model.

 

Object storage is an intelligent evolution of disk drives that can store and serve objects rather than simply place data on tracks and sectors. And it can bring the following benefits:

  • Intelligent space management in the storage layer
  • Data aware pre-fetching and caching
  • Robust shared access by multiple clients
  • Scalable performance using off-loaded data path
  • Reliability and security

Several vendors such as EMC and NetApp are already supporting OSD.

No more Huawei-Symantec

Huawei-Symantec is no more. Last night, Huawei, the Chinese telecom giant, bought Symantec’s remaining 49% stake in the joint venture for USD530 million.

The joint venture was initially set up in 2007 to focus on 3S – Server, Storage and Security. With the consolidation under one owner and one name, Huawei will continue to focus on the 3S. And since Huawei has a telco business as well, a lot of the 3S solutions will come into play, and it is likely that Huawei will become a cloud player too.

Highroad to Parallel Road

Unless you are working with highly parallelized access to files in a large scale-out NAS environment, you probably don’t get to work much with Parallel NFS (pNFS). pNFS is part of the NFSv4.1 (RFC 5661) standard, introduced in January 2010 to address the NFS protocol in clustered, scale-out NAS environments. Its purpose is to provide parallel file access across distributed servers.

pNFS is heavily driven by Panasas, NetApp, EMC, IBM and Sun (now Oracle), among others. And funnily enough, the company that sticks out from the bunch is one that used to tout block storage, not files, as the way to go. That’s EMC, the company better known for its SAN solutions than its NAS (remember the Celerra and IP4700?). And EMC has embraced pNFS in a big, big way. Read the blog of EMC’s CTO for Global Marketing, Chuck Hollis, here and here.

However, unknown to many, a lot of the thinking that went into pNFS is very similar to an EMC product from some years ago. That product is EMC HighRoad, which was later renamed the Multi-Path File System (MPFS).

Note: If you want to know more about the history of HighRoad/MPFS, read this blog.

The cornerstone of EMC MPFS is their File Mapping Protocol, or FMP, a robust protocol that handles the mapping of files to their addressable blocks in storage. In a nutshell, when I was made responsible for this product during my time at EMC, I used to pitch to companies that with MPFS, the file request goes through NFS but the response to the requester can be in blocks (iSCSI or Fibre Channel). The beauty of this was that NFSv3 was chatty and heavy, but the delivery of data in blocks via iSCSI or Fibre Channel has less overhead compared to NFSv3.

Hence the delivery is faster, and EMC touted performance 2-4x faster than NFS. Indeed, I have seen some lab test results from EMC’s work with the Schlumberger High Performance Lab in Houston, and the numbers were impressive. I still have them in a PowerPoint somewhere.

Circa 2003-2004, EMC donated the FMP code to the IETF and, as they say, the rest is history.

The picture below basically summarizes what pNFS is all about.

 

An NFSv4.1 client basically checks with a metadata server, via the pNFS protocol, about the layout of the distributed cluster of servers. The file could be striped across multiple nodes of the cluster, and it is the metadata server that provides the map for the client to access the blocks or files on these nodes. This is exactly what the EMC HighRoad/MPFS File Mapping Protocol (FMP) did, mapping file requests to their corresponding blocks. See the diagram below:

 

And one of the powerful features of pNFS is that it is not just about NFS. The green arrow you see in the diagram above is the storage-access protocol. That access protocol can be NFSv4.1, CIFS, iSCSI, Fibre Channel, FCoE, InfiniBand or Object Storage Device (OSD).
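Putting the flow above into a rough sketch (purely illustrative, not a real NFSv4.1 client): the client asks the metadata server for a layout, then fetches the stripes directly, and in parallel, from the data servers.

```python
# Illustrative-only sketch of the pNFS idea: a metadata server hands out a layout
# (which data server holds which stripe), and the client then reads the stripes
# directly -- and in parallel -- from those data servers. Not a real NFSv4.1 client.

from concurrent.futures import ThreadPoolExecutor

class MetadataServer:
    def __init__(self, layouts):
        self.layouts = layouts   # filename -> list of (data_server, stripe_id)

    def get_layout(self, filename):
        """Conceptually like LAYOUTGET: tell the client where the stripes live."""
        return self.layouts[filename]

class DataServer:
    def __init__(self, stripes):
        self.stripes = stripes   # stripe_id -> bytes

    def read_stripe(self, stripe_id):
        return self.stripes[stripe_id]

def pnfs_read(mds, filename):
    layout = mds.get_layout(filename)
    # Fetch every stripe in parallel, straight from the data servers
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda entry: entry[0].read_stripe(entry[1]), layout)
    return b"".join(parts)

if __name__ == "__main__":
    ds1 = DataServer({0: b"The quick "})
    ds2 = DataServer({1: b"brown fox"})
    mds = MetadataServer({"/vol/file.dat": [(ds1, 0), (ds2, 1)]})
    print(pnfs_read(mds, "/vol/file.dat"))   # -> b'The quick brown fox'
```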

In order to have pNFS working, the NFS client must be NFSv4.1-ready, and that code has been made available in Linux and OpenSolaris. Other Unix vendors will, no doubt, be coming out with their NFSv4.1 implementations soon. Oooooh, there will be a Windows NFSv4.1 client coming as well!

But I want to dispel the notion that EMC is only a SAN company. EMC is a very strong NAS company too, and if you have seen the IDC market share numbers (ok, ok, many of you out there will argue about it), EMC is #1 in NAS. And their contribution to pNFS is immense.

NFSv4 – Your filesystem librarian

I was up at about 1am several nights ago listening to the SNIA update on NFSv4, and it was worth losing sleep over. I am both intrigued and interested to see full-scale NFSv4 adoption in the coming few years.

We all know that NFS has been around for more than 20 years already. Version 2 was released way back in 1989, with version 3 being around since 1995. That’s a long time, and it is beginning to show its age. NFS version 4 is not new either. Believe it or not, it was released in 2000, 11 years ago. But there was a significant update in 2010 with NFS version 4.1, which came with parallel NFS (pNFS) support to address new requirements such as scale-out NAS and file striping across clusters of nodes.

I am doing my bit to disseminate NFS version 4.x updates and to move the storage networking and filesystem community towards its imminent wide-scale adoption. This is one of many entries I intend to share about NFS version 4.

So, why this librarian thingy? First of all, those of you working with NFSv3 have probably encountered issues with file locking and also high availability.

Don’t you just hate it when the server reboots and your NFS mount point hangs? Depending on whether it was a hard mount or a soft mount, the NFS retries could take forever, or sometimes the NFS client just freezes. In addition to that, another frequent complaint is that NFSv3 has lousy file locking.

However, in the early years of NFS, the world was a very different place. Such issues with file locking and HA were adequately addressed by NFSv2/v3 because the demands of the client-server world back then were lower. As the world progressed into the new millennium, NFSv2/v3 started to sputter.

In this lesson #1, I would like to share two key features of NFSv4 that address the two issues I brought up – NFS HA and file locking. The two features are:

  • Lease
  • Delegation

As I said, NFS high availability in version 3 was simplistic. In version 3, if an NFS client fails, the NFS server has little knowledge that the client has failed. Remember, NFSv3 is stateless. This can cause complications and issues such as ambiguity about file locking. Likewise, if an NFS server fails, the client can freeze and, when the server recovers, can get stale NFS handles and have all sorts of problems related to file locking. Many locks have to be released before an application can restart properly.

In NFSv4, handling a failure of either the NFS server or the client with regards to file locking is much simpler. The mechanism is leasing, and both the client and server know what is happening to each other, because NFSv4 is a stateful protocol. This is how leasing works:

  1. The NFS client leases a file lock from the NFS server for a certain period of time, e.g. N seconds. It renews the lease if it needs to hold the lock beyond that N-second period.
  2. If the NFS client fails, its lease expires; the lock is reclaimed by the NFS server and released to other clients after a grace period.
  3. If the NFS server fails and is rebooted, all the files are locked for M seconds (a grace period) for the incumbent NFS clients to reclaim their locks. If a lock is not reclaimed by its client, the file lock is released.

Such an agreement, with stateful communication between the clients and the NFS server, makes file locking in a high-availability environment much simpler and more robust.
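As a toy illustration of that agreement (conceptual only, not the actual NFSv4 wire protocol), the sketch below models a lock lease that must be renewed within N seconds; if the client goes silent, the server can reclaim the lock and hand it to another client once the lease runs out.

```python
# Toy model of NFSv4-style lock leasing -- conceptual only, not the real protocol.
# A client holds a lock only while its lease is fresh; if it stops renewing,
# the server reclaims the lock and can hand it to someone else after the lease expires.

import time

LEASE_SECONDS = 90          # the "N seconds" in the steps above (value is an assumption)

class LeasingServer:
    def __init__(self):
        self.locks = {}     # path -> (client_id, lease_expiry)

    def lock(self, client_id, path):
        holder = self.locks.get(path)
        if holder and holder[1] > time.time() and holder[0] != client_id:
            return False                                  # someone else holds a live lease
        self.locks[path] = (client_id, time.time() + LEASE_SECONDS)
        return True

    def renew(self, client_id, path):
        """Client renews before the lease runs out to keep its lock."""
        if self.locks.get(path, (None,))[0] == client_id:
            self.locks[path] = (client_id, time.time() + LEASE_SECONDS)

if __name__ == "__main__":
    srv = LeasingServer()
    print(srv.lock("clientA", "/data/file"))   # True  -> clientA gets the lock
    print(srv.lock("clientB", "/data/file"))   # False -> lease still live for clientA
    # If clientA crashes and never renews, after LEASE_SECONDS the same call succeeds.
```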

Another new feature of NFSv4 is delegation. Part of the filesystem exported by the NFS server can be delegated, or “loaned out”, to an NFS client. The NFS client that has “borrowed” this piece of the filesystem can then work on it in its local cache with little communication to the NFS server (a performance gain, and a reduction in the chattiness of the previous version). Here’s a step-by-step guide to the delegation process (a small sketch follows the list).

  1. The NFS client requests a piece of the filesystem. The NFS server “lends” this piece to the client as a lease, if it is not already locked.
  2. The NFS client works on this piece in its local cache.
  3. Once the NFS client has completed its writes and commits the piece of the filesystem in its local cache, it releases the “borrowing” lease back to the NFS server.
  4. If other NFS clients request the same piece of the filesystem while it is out on loan, the NFS server says, “Sorry, I loaned it out”.
  5. If a higher-order authority requests the piece of the filesystem, the NFS server says to the NFS client, “I need this back”, and sends an order to recall the delegation.
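And here is a similarly hedged toy sketch of the delegation flow: the server loans out a piece of the filesystem, turns away other would-be borrowers while it is on loan, and can recall it when a higher authority needs it back. Again, this is a conceptual model, not the NFSv4 implementation.

```python
# Toy model of NFSv4 delegation -- conceptual only. A piece of the exported
# filesystem is "loaned out" to one client, who works on it in its local cache;
# the server refuses other borrowers and can recall the loan when needed.

class DelegatingServer:
    def __init__(self):
        self.delegations = {}    # path -> client_id currently holding the delegation

    def request_delegation(self, client_id, path):
        if path in self.delegations:
            return "Sorry, I loaned it out"              # step 4 above
        self.delegations[path] = client_id
        return "Granted -- work on it in your local cache"

    def release(self, client_id, path):
        """Step 3: client commits its cached writes and hands the lease back."""
        if self.delegations.get(path) == client_id:
            del self.delegations[path]

    def recall(self, path):
        """Step 5: a higher authority needs it -- 'I need this back'."""
        return self.delegations.pop(path, None)          # client must flush and return it

if __name__ == "__main__":
    srv = DelegatingServer()
    print(srv.request_delegation("clientA", "/export/projects"))  # granted
    print(srv.request_delegation("clientB", "/export/projects"))  # Sorry, I loaned it out
    srv.recall("/export/projects")                                 # forced recall
    print(srv.request_delegation("clientB", "/export/projects"))  # granted now
```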

In many ways, both the file locking lease and delegation work like a librarian: pieces of the filesystem are loaned to the requester for a lease period, much like books are loaned to a borrower for a period of time.

Enjoy your weekend!