The reverse wars – DAS vs NAS vs SAN

It has been quite an interesting 2 decades.

In the beginning (starting in the early to mid-90s), SAN (Storage Area Network) was the dominant architecture. DAS (Direct Attached Storage) was on the wane as the channel-like throughput of the Fibre Channel protocol, coupled with FC’s million-device addressing, obliterated parallel SCSI, which could only handle 16 devices and throughput of up to 80 (later 160 and 320) MB/sec.

NAS, defined by the CIFS/SMB and NFS protocols, was happily chugging along on the 100 Mbit/sec network, occasionally getting sucked into arguments about why SAN was better than NAS. I was already heavily dipped into NFS, because I was pretty much a SunOS/Solaris bigot back then.

When I joined NetApp in Malaysia in 2000, the NAS-SAN wars were already going on, waiting for me. NetApp (or Network Appliance, as it was known then) was trying to grow beyond its dot-com roots into the enterprise space, and guys like EMC and HDS were frequently trying to put NetApp down.

“It’s a toy” was the most common jibe I got in regular engagements, until EMC suddenly decided to attack Network Appliance directly with their EMC CLARiiON IP4700. EMC guys would fondly remember this as the “NetApp killer”.

Why demote archived data access?

We are all familiar with the concept of data archiving. Passive data gets archived from production storage and is migrated to a slower and often cheaper storage medium such as tapes or SATA disks. Hence the terms nearline and offline data were created. With that, IT constantly reminds users that the archived data is infrequently accessed, and therefore, they have to accept slower access to passive, archived data.

Business conditions have certainly changed, because the need for data to be 100% online is becoming more relevant. The new competitive nature of businesses dictates that data must be at the fingertips, because speed and agility are the new competitive advantage. Often the total amount of data, production and archived, runs into hundreds of TBs, even into petabytes!

The industries I am familiar with – Oil & Gas, and Media & Entertainment – are facing this situation. These industries have a deluge of files and unstructured data in their archives, much of it dormant, inactive and sitting on old tapes of a bygone era. Yet these files and unstructured data have the most potential to be explored, mined and analyzed to realize their value to the organization. In short, the archived data and files must be democratized!

The flip side is, when the archived files and unstructured data are coupled with a slow access interface or an unreliable storage infrastructure, the value of archived data is downgraded because access, applications and business requirements keep grinding against one another. How would organizations value archived data more if the access path to the archived data is so damn hard?!

An interesting solution fell into my lap some months ago, and putting A and B together (A + B), I believe the access path to archived data can be of unbelievably high performance, simple, transparent and, most importantly, remove the BLOODY PAIN of FILE AND DATA MIGRATION! For storage administrators and engineers familiar with data migration, especially if the size of the migration runs into hundreds of TBs or even PBs, you know what I mean!

I have known this solution for some time now, because I have been avidly following its development after its founders left NetApp following their Spinnaker venture to start Avere Systems.



No Flash in the pan

The storage networking market is now teeming with flash solutions. Consumers are probably sick to their stomachs trying to get a better insight into which flash solution they should be considering. There is so much hype, fuzz and buzz, like a swarm of bees, but in the chaos of the moment there is actually a calm and discerning pattern slowly, but surely, emerging. Storage networking guys would probably know this well, but for the benefit of other readers, how we view flash (and other solid state storage) becomes clear with the picture below, which shows the flash performance gap:

(picture courtesy of  http://electronicdesign.com/memory/evolution-solid-state-storage-enterprise-servers)

Right at the top, we have the CPU/Memory complex (labelled as Processor). Our applications, albeit bits and pieces of them, run in this CPU/Memory complex.

Therefore, we can see Pattern #1 showing up.

HDS HNAS kicks ass

I am dusting off the cobwebs of my blog. After almost 3 months of inactivity (and trying not to run afoul of the Social Media Guidelines of my present company), I have mustered enough energy to start writing again. I am tired, and I am finishing off my previous engagements prior to joining HDS. But I am glad those are coming to an end, with the last job in Beijing next week.

So officially, I will be in HDS as of November 4, 2013. And to get into my employer’s good books, I think I should start with something with which HDS has proved many critics wrong. The notion that HDS is poor with NAS solutions has been dispelled by a recent SPECSfs benchmark report, especially when it comes to NFS file performance. HDS has never been much of a big shouter about their HNAS, even back in the days of the OEM with BlueArc. The gap period after the BlueArc acquisition was also, in my opinion, quiet, but perhaps that was the gestation period for this kick-ass announcement a couple of weeks ago. Here is one of the news articles circulating on the web, from the ever trusty El Reg.

HDS has never been a big shouter like EMC and NetApp, who have plenty of marketing dollars to spend. EMC Isilon and NetApp C-Mode have always touted their mighty SPECSfs numbers, usually with a high number of controllers or nodes behind the benchmarks. More often than not, many readers would probably focus more on the NFSops/sec figures than on the number of heads required to generate those figures.

Unaware of this HDS announcement, I was already asking myself that question about NFSops/sec per SINGLE controller head. So, on September 26, 2013, I did a table comparing some key participants of the SPECSfs2008_nfs.v3 benchmark, and here is the table:

In the last columns of the 2 halves (which I have highlighted in red), the NFSops/sec per single controller head numbers are shown. I hope that readers will view the performance numbers more objectively after reading this. I will let you make your own decisions, but ultimately, they are what they are. One should not be over-mesmerized by the super million NFSops/sec until one looks under the hood. Secondly, one should also look at things more holistically, such as $/NFSops/sec, $/ORT (overall response time), $/GB managed and other relevant indicators of the systems sold.
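To illustrate the point, here is a trivial sketch (in Python) of the kind of normalization I did in the table. The figures in it are made up purely for illustration, not taken from any published SPECSfs result.

```python
# A back-of-the-envelope sketch of the "per controller head" view.
# All figures below are hypothetical, purely to show the arithmetic.

def per_head(total_nfs_ops, heads, system_price=None):
    """Normalize a SPECSfs-style result by the number of controller heads."""
    ops_per_head = total_nfs_ops / heads
    price_per_op = system_price / total_nfs_ops if system_price else None
    return ops_per_head, price_per_op

# A big many-node cluster versus a small dual-head system (hypothetical numbers)
big = per_head(total_nfs_ops=1_000_000, heads=140, system_price=5_000_000)
small = per_head(total_nfs_ops=300_000, heads=2, system_price=500_000)

print(f"Big cluster : {big[0]:>9,.0f} NFSops/sec per head, ${big[1]:.2f} per NFSops/sec")
print(f"Small system: {small[0]:>9,.0f} NFSops/sec per head, ${small[1]:.2f} per NFSops/sec")
```

Run the numbers this way and the "monster" cluster suddenly looks a lot less impressive per head than the small system, which is exactly the point of the table above.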

But I do not want to take the thunder away from HDS’ HNAS platforms in this recent benchmark. In summary,

To reach a respectable number of 607,647 NFSops/sec with a sub-millisecond response time is quite incredible. The ORT of 0.59 msec should not be taken lightly, because eking out even 0.1 msec is not easy. Therefore, getting close to half a millisecond is pretty awesome.

This is my first blog after 3 months. I am glad to be back, and hopefully with the monkey off my back (I am referring to my outstanding engagements), I can concentrate on writing good stuff again. I know, I know … I still owe some people some entries. It’s great to be back 🙂

We raid vRAID

I took a bit of time off to read up on Violin’s vRAID technology, because I realized that vRAID (other than Violin’s vXM architecture) is the other most important technology that differentiates Violin Memory from the other upstarts. I blogged at a high level about Violin a few entries ago, and we continue Violin’s impressive entrance with a storage technology that has been around for almost 25 years – RAID. Incidentally, I found this picture of the original RAID paper (see below):

Has RAID evolved with solid state storage? Evidently no, because I have not read of any vendors (so far) touting any RAID revolution in their solid state offerings. There has been a lot of negative talk about RAID, but RAID has been the cornerstone and the foundation of storage since the beginning. With the onslaught of very large capacity HDDs, the demands of packing more bits per inch and the insatiable need for reliability, RAID is slowly beginning to show its age. Cracks in the armour, I would say. And there are many newer, slightly more refined versions of RAID, from the Network RAID-style of the HP P4000 or the Dell EqualLogic, to the RAID-X of IBM XIV, to the declustered RAID innovations of Panasas. (Interestingly, Garth Gibson, one of the authors of the original RAID paper, is a founder of Panasas.)

And the new vRAID from Violin doesn’t stray much from the good ol’ RAID, but it has been adapted to address the issues of solid state devices.

Solid state devices (notably NAND Flash, since everyone is using them) are very different from the usual spinning disks of HDDs. They behave differently, and pairing solid state devices with the present implementations of RAID could be like mixing oil and water. I am not saying that present RAID cannot work with solid state devices, but has RAID adapted to the idiosyncrasies of Flash?

It is like putting an old crankshaft into a new car. It might work for a while, but in the long run, it could damage the car. Similarly, conventional RAID might have a detrimental performance and availability impact on solid state devices. And we have hardly seen storage vendors coming out to say that their RAID technology has been adapted to the solid state devices they are selling. This silence likely means that they are just adapting to market requirements and not changing their RAID code very much to take advantage of Flash, or other solid state storage for that matter. Violin Memory has boldly come forward to meet that requirement, and vRAID is their answer.

Violin argues that there are bottlenecks at the external RAID controller or software RAID level, as well as in the use of legacy disk drive interfaces. And this is indeed true, because this very common RAID implementation squeezes performance at the expense of other components such as CPU cycles.

Furthermore, there are plenty of idiosyncrasies in Flash, such as its erase-before-write mechanism. The nature of NAND Flash, unlike DRAM, requires a block to be erased before a write to that block is allowed. There is no in-place “modify” per se, unlike the read-modify-write operation often applied in parity-based RAID-5 and RAID-6. Because of this nature, it is more like read-erase-write, and while the erase of the block is occurring, read operations are stalled. That is why most SSDs have impressive read latency (in microseconds) but very poor write latency (in milliseconds). Furthermore, the parity-based RAID write penalty can further aggravate the situation when typical RAID technology is applied to NAND Flash solid state storage.
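To put a number on that write penalty, here is a minimal sketch of the classic small-write arithmetic for parity RAID. This is generic RAID math, not Violin’s vRAID, and the figures are illustrative.

```python
# Classic small-write penalty arithmetic for parity RAID (generic, illustrative).
# RAID-5 turns one random host write into 4 back-end I/Os (read old data,
# read old parity, write new data, write new parity); RAID-6 into 6.
# On NAND Flash, each of those back-end writes may also sit behind a slow erase.

WRITE_PENALTY = {"RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def backend_iops(host_iops, read_ratio, raid_level):
    """Back-end IOPS generated by a host workload on a given RAID level."""
    reads = host_iops * read_ratio
    writes = host_iops * (1 - read_ratio)
    return reads + writes * WRITE_PENALTY[raid_level]

# Example: 10,000 host IOPS at a 70:30 read/write ratio
for level in WRITE_PENALTY:
    print(f"{level:8s}: {backend_iops(10_000, 0.7, level):,.0f} back-end IOPS")
```

On spinning disks this amplification only costs IOPS; on NAND Flash, every extra back-end write also eats into the erase cycles of the cells, which is the double whammy described above.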

As the blocks in the NAND Flash fill up, the accumulation of read-erase-write will not only reduce the lifespan of the blocks in the NAND Flash, it will also reduce the IOPS to a normalized level we call the Steady State. I wrote about this in my blog, “Not all SSDs are the same”, some moons ago, where I covered the SNIA Solid State Storage Performance Test Specification (SSS PTS) and the 3 distinct phases of a typical NAND Flash SSD:

  • Fresh Out of the Box (FOB)
  • Transition
  • Steady State
This performance degradation is part of what vendors call the “Write Cliff”, where there is a sudden drop in IOPS performance as the NAND Flash SSD ages. Here’s a graph that shows the performance drop.

Violin’s vRAID, implemented within its switched vXM architecture itself, and using proprietary high performance flash controllers together with the flash-optimized vRAID technology, is able to deliver sustained IOPS throughout the lifespan of the flash SSD, as shown below:

To understand vRAID, we have to understand the building blocks of the Violin storage array. Eight 4GB NAND Flash chips are packed into a 32GB Flash Package, and 16 of these 32GB Flash Packages are then consolidated into a 512GB VIMM (Violin Inline Memory Module). The VIMM is the starting block and can be considered a “disk”, since we are used to the concept of a “disk” in the storage networking world. 5 of these VIMMs form a RAID group of 4+1 (four data and one parity), giving redundancy, performance and capacity similar to RAID-5.

The block size used is 4K, and this 4K block is striped across the RAID group as 1K pages, one on each of the VIMMs in the RAID group. Each of these 1K pages is managed independently and can be placed anywhere in any flash block in the VIMMs, spread out for the lowest possible latency and highest bandwidth. This contributes to the “spike-free latency” of Violin Memory. Additionally, there is ECC protection within each 1K page to correct flash bit errors.
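Working through those building-block numbers, here is a minimal sketch of the arithmetic quoted above; it is only the capacity and stripe math, not a model of Violin’s internal data layout.

```python
# The vRAID building blocks, by the numbers quoted above (arithmetic only).

GB = 1  # work in GB for readability

flash_chip    = 4 * GB                 # one NAND Flash chip
flash_package = 8 * flash_chip         # 8 chips    -> 32 GB Flash Package
vimm          = 16 * flash_package     # 16 packages -> 512 GB VIMM

raid_group_vimms = 5                               # 4 data + 1 parity
raw_capacity     = raid_group_vimms * vimm         # 2,560 GB raw
usable_capacity  = (raid_group_vimms - 1) * vimm   # 2,048 GB usable

# A 4K block is striped as 1K pages across the 4 data VIMMs,
# with a corresponding 1K parity page on the fifth VIMM.
block_kb, data_vimms = 4, 4
page_kb = block_kb // data_vimms

print(f"Flash Package: {flash_package} GB, VIMM: {vimm} GB")
print(f"RAID group (4+1): {raw_capacity} GB raw, {usable_capacity} GB usable")
print(f"Stripe unit: {page_kb}K page per data VIMM")
```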
To protect against metadata corruption, there is an additional, built-in RAID check bit to correct VIMM errors. Lastly, one important feature addresses the read-erase-write weakness of NAND Flash: vRAID ensures that the slow erases never block a Read or a Write. This architectural feature enables spike-free latency in mixed Read/Write environments.

Here’s a quick overview of Violin’s vRAID architecture:

I still feel that we need a radical move away from traditional RAID, and vRAID is moving in the right direction to evolve RAID to meet the demands of the data storage market. Revolutionary and radical it may not be, but then again, is the market ready for anything else?

As I said, so far Violin is the only all-Flash vendor that has boldly come forward to meet the storage latency problem head-on, and they have been winning customers very quickly. Well done!

NFS-phobic in Malaysia

I taught the EMC Cloud Infrastructure and Services (CIS) class last week and naturally, a few students came from the VMware space. I asked how they were implementing their storage and everyone said Fibre Channel.

I have spoken to a lot of people about this as well in the past, whether they are using SAN or NAS storage for VMware environments. And almost 99% would say SAN, either FC-SAN or iSCSI-based SAN. Why???

When I ask these people about deploying NFS, the usual reply would be related to performance.

NFS version 3 won the file sharing protocol race in its early days, when Unix variants were prevalent, though it was not helped by the Balkanization of the Unices in the 90s. Furthermore, NFS lost quite a bit of ground between NFSv3 in 1995 and the coming-out party of NFSv4.1 just 2 years ago. The in-between years were barren, and NFS became a bit of a joke, with “Need For Speed” or “No F*king Security” expansions of the acronym. That could also be a contributing factor to the NFS-phobia we see here in Malaysia.

I have experience with both SAN and NAS and understand the respective protocols of Fibre Channel, iSCSI, NFS and CIFS, and I feel that NFS has been given unfair treatment by people in this country. For the uninformed, NFS is the only NAS protocol supported by VMware. CIFS, the Windows file sharing protocol, is not supported, probably for performance and latency reasons. However, if you catch up with high performance computing (HPC), clustering, or MPP (Massively Parallel Processing) resources, you will almost always read about NFS being involved in delivering very high performance I/O. So, why isn’t NFS proposed with confidence in VMware environments?

I have blogged about this before. And I want to use my blog today to reassert what I believe in and hope that more consideration can be given to NFS when it comes to performance, even for virtualized environments.

NFS performance is competitive when compared to Fibre Channel and, in a lot of cases, better than iSCSI. It is just that the perception of poor NFS performance is stuck in people’s minds, and it is hard to change that. However, there are multiple credible sources stating that NFS is comparable to Fibre Channel. Let me share with you one of the sources that compared NFS with other transport protocols:

From the 2 graphs of IOPS and latency, NFS fares well against other more popular transport protocols in VMware environments. Those NFS performance numbers are probably not RDMA-driven either; otherwise, RDMA could very well boost the NFS numbers even higher.

What is this RDMA (Remote Direct Memory Access)? RDMA is already quietly making its presence felt, being used with transports like Infiniband and 10 Gigabit Ethernet. In fact, Oracle Solaris 11 will use RDMA as the default transport whenever RDMA-enabled NICs are present in the system. The diagram below shows where RDMA fits in the network stack.

RDMA eliminates the need for the OS to participate in the delivery of data, depositing the data directly from the initiator’s memory into the target’s memory. This eliminates traditional networking overheads such as buffer copying and setting up network data structures for the delivery. A little comparison of RDMA with traditional networking is shown below:

I was trying to find out how prevalent NFS is in supporting the fastest supercomputers in the world on the Top500 supercomputing list. I did not find details of NFS being used, but what I found was that the Top500 supercomputers do not employ Fibre Channel SAN at all! Most have proprietary interconnects, with some on Infiniband and 10 Gigabit Ethernet. I would presume that NFS figures in most of them, and I am confident that NFS can be a protocol of choice for high performance environments, and even VMware environments.

The future looks bright for NFSv4. We are beginning to see the words “parallel NFS (pNFS)” being thrown into conversations around here, and the awareness is there. NFS version 4.2 is just around the corner as well, promising greater enhancements to the protocol.

 

Not all SSDs are the same

Happy Lunar New Year! The Chinese around the world have just ushered in the Year of the Water Dragon yesterday. To all my friends and family, and readers of my blog, I wish you a prosperous and auspicious Chinese New Year!

Over the holidays, I have been keeping up with the progress of Solid State Drives (SSDs). I am sure many of us are mesmerized by SSDs, and the storage vendors are touting the best that SSDs have to offer. But let me tell you one thing – you are probably getting the least of what the best SSDs have to offer. You might be puzzled why I say this.

Let me share with you a common sales pitch. Most (if not all) storage vendors will tout performance (usually IOPS) as the greatest benefit of SSDs. The performance numbers have to be compared to something, and that something is your regular spinning Hard Disk Drives (HDDs). The slowest SSDs, in terms of IOPS, are about 10-15x faster than HDDs. A single SSD can churn out at least 5,000 IOPS, while the fastest 15,000 RPM HDDs churn out about 200 IOPS (depending on the HDD vendor). Therefore, the slowest SSDs can be 20-25x faster than the fastest HDDs, when measured in IOPS.
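The arithmetic behind that pitch is simple enough; here is a quick sketch using the rough, order-of-magnitude figures above.

```python
# Rough, order-of-magnitude figures from the paragraph above.
slow_ssd_iops = 5_000    # a slow SSD
fast_hdd_iops = 200      # a fast 15,000 RPM HDD (vendor dependent)

speedup = slow_ssd_iops / fast_hdd_iops
print(f"Even a slow SSD is about {speedup:.0f}x faster than a 15K RPM HDD in IOPS")

# Put another way, matching one slow SSD on random IOPS alone would take
# roughly this many 15K RPM spindles:
print(f"15K RPM spindles needed: {slow_ssd_iops / fast_hdd_iops:.0f}")
```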

But the intent of this blogger is to share with you more about SSDs. There’s more to know, because SSDs are not all built the same. There are write-biased SSDs and read-biased SSDs; there are SLC (single level cell) and MLC (multi level cell) SSDs, and so on. How do you differentiate them if Vendor A touts their SSDs and Vendor B touts their SSDs as well? You are not comparing SSDs and HDDs anymore. How do you know what questions to ask when they show you their performance statistics?

SNIA has recently released a methodology called the “Solid State Storage (SSS) Performance Test Specification (PTS)” that helps customers evaluate and compare SSD performance from a vendor-neutral perspective. There is also a whitepaper related to the SSS PTS. This is something very important, because we have to continue to educate the community about what is right and what is wrong.

In a recent webcast, the presenters from the SNIA SSS TWG (Technical Working Group) mentioned a few questions that I think we, as vendors and customers, should think about when faced with an SSD sales pitch. I thought I would share them with you.

  • Was the performance testing done at the SSD device level or at the file system level?
  • Was the SSD pre-conditioned before the testing? If so, how?
  • Were the performance results taken at a steady state?
  • How much data was written during the testing?
  • Where was the data written to?
  • What data pattern was tested?
  • What was the test platform used to test the SSDs?
  • What hardware or software package(s) were used for the testing?
  • Were the HBA bandwidth, queue depth and other parameters sufficient to test the SSDs?
  • What type of NAND Flash was used?
  • What is the target workload?
  • What was the percentage weight of the mix of Reads and Writes?
  • Are there warranty life design issues?

I thought that these questions were very relevant in understanding SSD performance. I also got to know that SSDs behave differently throughout the life stages of the device. From a performance point of view, there are 3 distinct performance life stages:

  • Fresh out of the box (FOB)
  • Transition
  • Steady State

 

As you can see from the graph below, an SSD fresh out of the box (FOB) displays impressive performance numbers. Over a period of time (the graph shows minutes), it transitions into a mezzanine stage of lower IOPS and finally normalizes into the state called the Steady State. The Steady State is the desirable test range that will give the most accurate IOPS numbers. Therefore, it is important that your storage vendor’s performance numbers are taken during this life stage.
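As a rough illustration of what “taken at steady state” means, here is a minimal sketch that checks whether a trailing window of IOPS samples has settled down. The actual SNIA PTS criteria are more precise (they also bound the slope of a best-fit line across the measurement window); the window and threshold used here are purely illustrative.

```python
# Illustrative steady-state check: the last few IOPS samples must stay within
# a band around their own average. Not the exact SNIA PTS criteria.

def reached_steady_state(iops_samples, window=5, excursion=0.10):
    """True if the last `window` samples stay within +/- excursion of their average."""
    if len(iops_samples) < window:
        return False
    tail = iops_samples[-window:]
    avg = sum(tail) / window
    return all(abs(s - avg) <= excursion * avg for s in tail)

fob_and_transition = [90_000, 82_000, 60_000, 41_000, 33_000]   # still dropping
steady             = [30_500, 29_800, 30_200, 29_900, 30_100]   # settled

print(reached_steady_state(fob_and_transition))  # False - do not report these numbers
print(reached_steady_state(steady))              # True  - report numbers from here
```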

Another consideration when interpreting SSD performance numbers is the type of test used. The test could be done at the file system level or at the device level. As shown in the diagram below, the test numbers could be taken from many different elements along the stack of the data path.

 

Performance for cached data would give impressive numbers, but it is not accurate. File system performance will not be useful either, because the data travels through different layers, masking the true performance capability of the SSDs. Therefore, SNIA’s performance testing is based on a synthetic, device-level test to achieve consistency and more accurate IOPS numbers.

There are many other factors used to determine the most relevant performance numbers. The SNIA PTS has 4 main test suites that address different aspects of an SSD’s performance. They are:

  • Write Saturation test
  • Latency test
  • IOPS test
  • Throughput test

The SSS PTS would be able to reveal which is a better SSD. Here’s a sample report on latency.

Once again, it is important to know and not to take vendors’ numbers verbatim. As the SSD market continues to grow, the responsibility lies on both sides of the fence – the vendor and the customer.

 

The recipe for storage performance modeling

Good morning, afternoon, evening, Ladies & Gentlemen, wherever you are.

Today, we are going to learn how to bake, errr … I mean, make a storage performance model. Before we begin, allow me to set the stage.

Don’t you just hate it when you are asked to do storage performance sizing and you don’t have a freaking idea how to get started? A typical techie would probably say, “Aiya, just use the capacity lah!”, and usually, they will proceed to size the storage according to capacity. In fact, sizing by capacity is the worst way to do storage performance modeling.

Bear in mind that storage is not a black box, although some people wished it was. It is not black magic when it comes to performance sizing because things can be applied in a very scientific and logical manner.

SNIA (Storage Networking Industry Association) has put together a storage performance modeling methodology (that’s quite a mouthful) and basically simplified it into these few key ingredients. This recipe is for storage performance modeling in general, and I am advising you guys out there to engage your storage vendor’s professional services. They will know their storage solutions best.

And I am going to say to you – don’t be cheap; engage professional services – to get to the experts out there. I was having a chat with a consultant just now at McDonald’s. I have known this friend of mine for about 6-7 years now, and his name is Sugen Sumoo, the Director of DBORA Consulting. They specialize in Oracle and database performance tuning and performance forecasting, and it is something a typical DBA can’t do, because DBORA Consulting is the professional service that brings expertise and value to Oracle customers. Likewise, you have to engage your respective storage professional services as well.

In a cook book or a cooking show, you are presented with the ingredients used and in this recipe for storage performance modeling, the ingredients (in no particular order) are:

  • Application block size
  • Read and Write ratio
  • Application access patterns
  • Working set size
  • IOPS or throughput
  • Demand intensity

Application Block Size

First of all, the storage is there to serve applications. We always have to look from the applications’ point of view, not the storage’s point of view. Different applications have different block sizes. Databases typically range from 8K-64K, and backup applications usually deal with larger block sizes. Video applications can have 256K block sizes or higher. It all depends.

The best way is to find out from the DBA, email administrator or application developers. The unfortunate thing is that most so-called technical people or administrators in Malaysia don’t have a clue about the applications they manage. So, my advice to you storage professionals: do your research on the application and assume its default value, because these clueless fellas are likely to have left it at the default.

Read and Write ratio

Applications behave differently at different times of the day, and at different times of the month (no, it’s not PMS). At the end of the financial year or calendar, there are some tasks that these applications do as well. But in a typical day, there is a different weightage or percentage of read operations versus write operations.

Most OLTP (online transaction processing) applications tend to be read-heavy and write-light, but we need to find out the ratio. Typically, it can be a 2:1 ratio or 60%:40%, but it is best to speak to the application administrators about it. DSS (Decision Support Systems) and data warehousing applications could have much higher reads than writes, while a seismic-analysis application can have multiple writes during the analysis periods. It all depends.

To counter the “clueless” administrators, ask lots of questions. Find out the workflow of several key tasks and ask what those tasks do at different checkpoints of the application’s processing. If you are lazy (please don’t be lazy, because it degrades your value as a storage professional), use a rule of thumb.

Application access patterns

Applications behave differently in general. They can be sequential, like backup or video streaming. They can be random like emails, databases at certain times of the day, and so on. All these behavioral patterns affect how we design and size the disks in the storage.

Some RAID levels tend to work well with sequential access and others, with random access. It is not difficult to find out about the applications’ pattern and if you read more about the different RAID-levels in storage, you can easily identify the type of RAID levels suitable for each type of behavioral patterns.

Working set size

This variable is a bit more difficult to determine. The working set is the chunk of the application’s data that has to be loaded into a working area, usually memory and cache memory, to be used and abused by the application users.

Unless someone is well versed with the applications, one would not be able to determine how much of the application’s data would be placed in memory and in cache memory. Typically, this can only be determined after the application has been running for some time.

The flexibility of having SSDs, especially the DRAM-type of SSDs, is very useful to ensure that there is sufficient “working space” for these applications.

IOPS or Throughput

According to the SNIA model, for I/O smaller than 64K, IOPS should be used as the yardstick for storage performance modeling. For anything larger, use throughput, with MB/sec as the measurement unit.
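The two yardsticks are tied together by the I/O size, so it is easy to sanity-check one against the other. A quick sketch, with illustrative figures:

```python
# Throughput is just IOPS multiplied by the I/O size (illustrative figures).

def mb_per_sec(iops, block_kb):
    return iops * block_kb / 1024

print(f"{mb_per_sec(10_000, 8):.0f} MB/sec")   # 10,000 IOPS of 8K OLTP I/O  -> ~78 MB/sec
print(f"{mb_per_sec(500, 256):.0f} MB/sec")    # 500 IOPS of 256K streaming  -> 125 MB/sec
```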

The application guy would be able to tell you what kind of IOPS their application is expecting or what kind of throughput they want. Again, ask a lot of questions, because this will help you determine the type of disks and the kind of performance you give to the application guys.

If the application guy is clueless again, ask someone more senior or ask the vendor. If the vendor engineers cannot give you an answer, then they should not be working for the vendor.

Demand intensity

This part is usually overlooked when it comes to performance sizing. Demand intensity refers to how intense the I/O requests are. They could come from one channel or one part of the application, or from several parts of the application in parallel. It is as if the storage is being ‘bombarded’ by applications, and this is the part that is hard to determine as well.

In some applications, the degree of intensity or parallelism can be tuned and to find out, ask the application administrator or developer. If not, ask the vendor. Also do a lot of research on the application’s architecture.
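One handy way to put a number on demand intensity is Little’s Law: the number of I/Os in flight equals the IOPS multiplied by the response time. A tiny sketch, with illustrative figures:

```python
# Little's Law: outstanding I/Os = IOPS x response time (illustrative figures).

def outstanding_ios(iops, response_time_ms):
    return iops * (response_time_ms / 1000.0)

# An application pushing 10,000 IOPS at a 5 ms response time keeps roughly
# 50 I/Os outstanding against the storage at any moment.
print(outstanding_ios(10_000, 5))   # 50.0
```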

And one last thing. What I have learned is to add buffers to the storage performance model. Typically I would add about 10-20% extra, but you never know. As storage professionals, I would strongly encourage you to engage professional services, because it is worthwhile, especially in the early stages of the sizing. It is usually a more expensive affair to size it after the applications have been installed and running.
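To make the recipe concrete, here is a minimal sizing sketch that pulls the ingredients together. Every figure in it (the RAID write penalties, the per-disk IOPS, the buffer) is a common rule-of-thumb assumption, not a vendor’s number; your storage vendor’s professional services will have far more refined models.

```python
import math

# Rule-of-thumb spindle count for an IOPS-driven workload (illustrative only).
WRITE_PENALTY = {"RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def disks_needed(host_iops, read_ratio, raid_level,
                 per_disk_iops=180, buffer_pct=0.20):
    reads  = host_iops * read_ratio
    writes = host_iops * (1 - read_ratio)
    backend = reads + writes * WRITE_PENALTY[raid_level]
    backend *= (1 + buffer_pct)                 # the 10-20% headroom mentioned above
    return math.ceil(backend / per_disk_iops)

# Example: 5,000 host IOPS, 70% reads, RAID-5, 15K RPM disks (~180 IOPS each)
print(disks_needed(5_000, 0.70, "RAID-5"))      # -> 64 spindles
```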

“Failure to plan is planning to fail”.  The recipe isn’t that difficult. Go figure it out.

NetApp SPECSfs record broken in 13 days


Thanks to my buddy Chew Boon of HDS, who put me on alert about the new leader of the SPECSfs benchmark results. NetApp’s “world record” was broken 13 days later by Avere Systems.

Avere has posted the result of 1,564,404 NFS ops/sec with an ORT (overall response time) of 0.99 msec. This benchmark was done by 44 nodes, using 6.808 TB of memory, with 800 HDDs.

Earlier this month, NetApp touted fantastic results and quickly came out with a TR comparing their solution with EMC Isilon. Here’s a table of the comparison

 

But those numbers were quickly made irrelevant by the Avere FXT, and Avere claims the world record title with the “smallest footprint ever”. Here’s a comparison in Avere’s blog, with some photos to boot.

 

For the details of the benchmark, click here. And the news from PR Newswire.

If you have not heard of Avere, they are basically the core team of Spinnaker. NetApp acquired Spinnaker in 2003 to create its clustered file system from the Spinnaker technology. The development and integration of Spinnaker into NetApp’s Data ONTAP took years and was buggy, and this gave legroom to competitors like Isilon to take market share in the clustered NAS/scale-out NAS landscape.

Meanwhile, NetApp did finally come good with the Spinnaker technology, and with ONTAP 8.0.1 and 8.1, the code of both platforms merged into one.

The Spinnaker team went on to develop a new technology called the “A-3 Architecture” (shown below) and positioned itself as a NAS accelerator.


The company has had 2 rounds of funding and now has a high performance system to compete with the big boys. The name Avere Systems is still pretty much unknown in this part of the world, and this “world record” will help position them more strongly.

But as I have said before, benchmarks are just ways to gain bigger bragging rights. It is a game of leapfrogging, and pretty soon this Avere record will be broken. It is nice while it lasts.