NFO for DFR

It has not caused severe pain yet, but it will. Storage is cheap, but as capacity grows it will eventually hit a point where it becomes difficult to maintain from a cost perspective.

I wrote about the lack of attention given to primary storage deduplication solutions in the local industry. Perhaps deduplication has matured to a point where it has become a no-brainer, or perhaps customers are already sick and tired of the word “dedupe”. Either way, we should not be distracted from the fact that data footprint reduction (DFR) in the generic sense, or storage efficiency as the fancy marketing term, must be applied somewhere to slow down the purchase of storage capacity.

Storage is getting fatter, and storage vendors’ revenue is getting fatter along with it. While this is good for the vendors’ pockets, customers have to face higher costs associated with:

  • Power, Cooling and Floorspace
  • Administration and management
  • Bandwidth
  • Resource utilization

All this does not make for prudent storage management, because fat storage is bad, just as getting fatter is bad for human beings. Storage must go on a diet, and deduplication is one of the few solutions out there. However, I have argued before that deduplication merely shrinks the container that holds the bits of data, completely unaware of what the content is. Deduplication does not shrink the data itself, and if the data holds few duplicates, deduplication does little to reduce the storage capacity consumed. There is no advantage unless the data footprint reduction (DFR) technology is content aware. (Note that I am using DFR as a generic term rather than data deduplication. The reason is obvious.)
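
To illustrate the point, here is a minimal, hypothetical sketch (in Python, and not any vendor’s actual engine) of fixed-size chunk deduplication. Repetitive data dedupes beautifully, but random data, standing in for already-compressed or encoded content, barely dedupes at all because the engine only sees containers of bits, not content.

```python
# Minimal sketch (not any vendor's implementation): fixed-size chunk dedupe
# to illustrate why container-level dedupe saves little on unique or
# already-compressed data.
import hashlib
import os

def dedupe_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Logical chunk count divided by the count of unique chunks."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha256(c).hexdigest() for c in chunks}
    return len(chunks) / max(len(unique), 1)

# Highly repetitive data: many identical chunks -> great dedupe ratio.
repetitive = b"A" * 4096 * 1000
# Random data stands in for compressed/encoded content: no duplicate chunks.
random_like = os.urandom(4096 * 1000)

print(f"repetitive data  : {dedupe_ratio(repetitive):.1f}:1")   # ~1000:1
print(f"encoded-like data: {dedupe_ratio(random_like):.1f}:1")  # ~1:1
```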

That is why data deduplication technologies do not work well with seismic files or encoded video files; the files are already highly optimized. But there is a technology that can look deeper into such unstructured files and produce storage capacity reduction with specific algorithms for specific types of files and file objects. That technology, I believe, is the truest form of data footprint reduction, and it is called Native Format Optimization (NFO).

I want to relate an old story from a few years back, when I brought an EMC BURA (Backup Recovery & Archive, a precursor to its present BRS division) senior manager to see a highly respected technical manager at Schlumberger in Malaysia. Schlumberger is the world’s largest oilfield services company and provides seismic analysis and interpretation software, and seismic files are highly encoded and compressed.

As usual, the senior manager, being a typical sales guy, started blabbering about how great Data Domain was (this was just after the EMC acquisition), and how it could dedupe any kind of file at 20:1 (exaggerated to 500:1 for certain text files), even seismic files. I was signalling to the EMC senior manager to stop his bullsh*t, but he went on and on. In the end, the Schlumberger technical manager politely told the EMC senior manager to shut up, because he had little understanding of what seismic files are like.

Now, back to Native Format Optimization (NFO) technology. In a nutshell, NFO plays tricks on the human visual system. The goal is to reduce the size of unstructured files without reducing the visual quality of the images (text, texture, colour, resolution, depth, hue, contrast, etc.) in the files.

Have a look at these 2 files. One is optimized with NFO and one is un-optimized. Can you tell the difference?

 

The human visual system is known to be:

  • Less sensitive to high frequency of colour variation
  • More sensitive to brightness than colour variation
  • Less sensitive to background colour in lower resolution
  • More sensitive to a picture’s motion than picture’s texture

Therefore, the eyes mostly perceive an image at the lowest-quality baseline. I got this information from George Crump’s Storage Switzerland article.
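
The practical trick, then, is to re-encode image-heavy content at settings the eye will not notice, while keeping the file in its original format. Below is a rough, hypothetical illustration in Python using the Pillow library; it is emphatically not Ocarina’s or balesio’s algorithm, just the general idea of content-aware optimization applied to a JPEG.

```python
# A rough illustration of content-aware optimization that keeps the file in
# its native format: re-encode a JPEG at a perceptually similar quality and
# keep the smaller result. NOT Ocarina's or balesio's algorithm, just a toy
# example of the general idea. Requires Pillow (pip install Pillow).
import os
from PIL import Image

def optimize_jpeg(src: str, dst: str, quality: int = 75) -> None:
    """Re-save a JPEG with optimized entropy coding and a lower quality
    setting that the eye is unlikely to notice on typical office images."""
    with Image.open(src) as img:
        img.save(dst, "JPEG", quality=quality, optimize=True)

    before, after = os.path.getsize(src), os.path.getsize(dst)
    print(f"{src}: {before} bytes -> {after} bytes "
          f"({100 * (1 - after / before):.1f}% smaller), still a plain JPEG")

# Hypothetical usage with made-up file names:
# optimize_jpeg("photo_in_report.jpg", "photo_in_report_optimized.jpg")
```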

Because NFO keeps files in their native format, the files do not need to be rehydrated the way deduped files do.

The capacity reduction savings are tremendous, and because the NFO approach is content aware, the benefits translate into higher cost savings in

  • Reduction of power, cooling and floorspace
  • Reduction in data management and administration tasks, especially backup
  • Improved bandwidth and improved disaster recovery
  • Higher performance
  • Delayed storage capacity purchase
  • many more

After the Ocarina acquisition by Dell in 2010, a search on the web revealed that probably only one vendor in Europe has boldly continued to enhance NFO technology in its products. The company is balesio, and you can read about their NFO technology here.

 

Linking Apple to SAN

Serendipity is how I would describe this encounter. I was introduced to Promise Technology’s Channel Sales Director earlier this week. When I saw his face, I realized that I already knew him, a Malaysian who previously worked at EMC in Taiwan and has been residing in that country for many years. We laughed and joked like old buddies and hence, the story begins …

I have known of Promise through its popular VTrak storage, which many Apple users here mistakenly assume is an Apple product. Here it is, appearing on Apple’s website:

Yes, that’s the 3rd picture from your left.

Another very strong Apple-related product from Promise Technology is Pegasus, a line of direct-attached storage (DAS) sporting Thunderbolt, a very fast interface that connects peripherals to Macs through the Mini DisplayPort. I found it strange to have a graphics display port being used to connect to a storage device, but as I looked deeper into Thunderbolt, I found that the technology was meant to extend the PCIe bus, together with DisplayPort, into a conduit that delivers high throughput serially. Impressive!

The picture below shows the Thunderbolt link connections, in which Intel provides two types of Thunderbolt controllers: a 2-port type and a 1-port type.

But Thunderbolt is not a network-based technology. It is channel-based, and hence connecting it to a Fibre Channel SAN is like mixing oil with water. Apple is not known for accessing storage through Fibre Channel, and since Apple products do not have a Fibre Channel interface, Promise Technology has come up with a Thunderbolt-to-Fibre Channel converter. They call it SANLink, and the picture below shows how it is done:

The SANLink can also be daisy-chained. In the example below, the Pegasus DAS is daisy-chained to a SANLink, which then extends and expands its capacity with the Fibre Channel-connected VTrak Ex30 or Ex10.

The connectivity is 8Gbps for the VTrak Ex30 and 4Gbps for the Ex10, and it has been validated to work with Mac OS X, Final Cut Studio and Apple’s Xsan.

This is targeted at Apple’s presence in the video editing and video production environment. I have one customer using our storage for their Apple file sharing, and I realized that these guys work in isolation most of the time. They are like a sheep-shearing house: take one job, work on it a bit, then pass it on to another colleague for the next stage in the video production process. Sharing is clearly not well established in this type of environment, and Promise wants to change that by opening up those hermit-like video editors and producers to sharing and collaborating in their work.

I don’t know of many vendors besides Apple that push Thunderbolt technology. It is a very high-performance interface, capable of delivering 20Gbps, but I am afraid Thunderbolt may suffer from the Apple-only syndrome.

Apple tends to be very cutting-edge with most technologies that go into its products. That makes Apple a high risk taker, and it puts Thunderbolt into that risky category: if Apple fails, Thunderbolt fails. So far, I have not seen Thunderbolt spreading like wildfire, but opening Apple up to SAN, both iSCSI and Fibre Channel, is good. It is time Apple embraced more of the storage networking technologies and standards out there, rather than being steadfast with its proprietary implementations of storage. Apple Filing Protocol (AFP) and Thunderbolt (for now) come to mind. It is good to be stubborn but …

Server way of locked-in storage

It is kind of interesting that every vendor out there claims to be as open as they can be, but the reality is that the competitive nature of the game is forcing storage vendors to talk open while their actions are certainly not.

Confused? I am beginning to see a trend … a trend that is forcing customers to be locked in with a certain storage vendor. I am beginning to feel that customers are given fewer choices, especially when the brand of server they select for their applications has implications for the brand of storage they will be locked into.

And surprise, surprise, SSDs are the pawns in this new cloak-and-dagger game. How? Well, I have been observing this for quite a while now, and when HP announced the Smart portfolio for its storage, it was time for me to say something.

In the announcement, it was reported that HP is coming out with its 8th generation ProLiant servers. As quoted:

The eighth generation ProLiant is turbo-charging its storage with a Smart Array containing solid state drives and Smart Caching.

It also includes two Smart storage items: the Smart Array controllers and Smart Caching, which both feature solid state storage to solve the disk I/O bottleneck problem, as well as Smart Data Services software to use this hardware

From the outside, analysts are claiming this is a reaction to the recent EMC VFCache product (I blogged about it here), and HP was quick to paint the EMC VFCache solution as a first-generation product, lacking the smarts (pun intended) of what the HP products have to offer. You can read about its performance prowess in the HP Connect blog.

Similarly, Dell announced its ExpressFlash solution, which ties its 12th-generation PowerEdge servers to its flagship (what else), Dell Compellent storage.

The idea is very obvious. Put a PCIe-based flash caching card in the server, and use a proprietary caching/tiering technology that ties the server to a certain brand of storage. Only with this card, which (incidentally) works only with this brand of servers, will you, Mr. Customer, be able to take advantage of the performance power of this brand of storage. Does that sound open to you?

HP is doing it with its ProLiant servers; Dell is doing it with ExpressFlash; EMC’s VFCache, while not advocating any brand of server, is doing it because VFCache works only with EMC storage. We have seen Oracle doing it with Exadata: the Oracle Enterprise database works best with Oracle’s own storage, and the intelligence is in its Smart Scan layer, a proprietary technology that works exclusively with the storage layer in the Exadata. Hitachi Japan, with its Hitachi servers (yes, Hitachi servers, which we rarely see in Malaysia), has had such a technology for the last 2 years. I wouldn’t be surprised if IBM and Fujitsu already have something in store (or perhaps I missed the announcement).

NetApp has been slow in this game, but we hope to see them come out with their own server-based caching products soon. More pure-play storage vendors are already singing the tune of SSDs (though not necessarily server-based).

The trend is obvious too, because the messaging is almost always about storage performance.

Yes, I totally agree that storage (any storage) has performance bottlenecks, especially when it comes to IOPS, response time and throughput. And every storage vendor is claiming that SSDs, in one form or another, are the knight in shining armour, ready to rid the world of lousy storage performance. Well, SSDs are not the panacea for storage performance headaches, because while they solve some performance issues, they introduce new ones somewhere else.

But it is becoming an excuse to introduce storage vendor lock-in, and how have customers responded to this new “concept”? Things are fairly new right now, but I would always advise customers to find out and ask questions.

Cloud storage to avoid vendor lock-in? Going to the cloud brings cloud service provider lock-in as well, but that’s another story.

 

Gartner WW ECB 4Q11

The Gartner Worldwide External Controller-Based Disk Storage market numbers were out last night, perennially following the IDC Disk Storage Systems Tracker.

The numbers held few surprises after a topsy-turvy year for vendors like IBM, HP and especially NetApp. Overall, the positions did not change much, but we can see that the 3 vendors I mentioned are facing very challenging waters ahead. Here’s a look at the overall 2011 numbers:

EMC is unstoppable, gaining 3.6% market share, while IBM lost 0.2% market share despite strong sales of its XIV and Storwize V7000 solutions. This could be due to lower than expected numbers from its jaded DS series. IBM needs to ramp up.

HP stayed stagnant, even though its 3PAR numbers have been growing well. It was hit by poor numbers from the EVA (now renamed the P6000) and, surprisingly, the P4000 as well. Looks like they are short a LeftHand (pun intended), and given the C-level upheavals HP went through in the past year, things are not looking good.

Meanwhile, Dell is unable to shake off its EMC divorce alimony, losing 0.8% market share. We know that Dell has been pushing very, very hard with Compellent, EqualLogic and the other technologies it acquired, but somehow things are not clicking yet.

HDS has been the one to watch, with its revenue growing in double digits like NetApp’s and EMC’s. Its market share gain was 0.6%, which is very good by HDS standards. NetApp gained 0.8% market share but seems vulnerable after 2 poor quarters.

The 4th quarter for 2011 numbers are shown below:

I did not blog about the IDC QView numbers, which report storage software market share, but they give this entry a bit of perspective from a software point of view. From the charts at The Register, EMC has been gaining market share at the expense of competitors like Symantec, IBM and NetApp.

Tabulated differently, here’s another set of data:

On all fronts, EMC is firing on all cylinders. Like a well-oiled V12 engine, EMC is going at it with so much momentum right now. Who is going to stop EMC?

Chink in NetApp MetroCluster?

Ok, let me clear the air about the word “chink” before I get into trouble. I am using it in its dictionary sense, not the racially offensive one that recently saw ESPN fire 2 of its employees over a headline about Jeremy Lin. According to my dictionary (Collins COBUILD), a chink is a very narrow crack or opening on a surface, and that is the only meaning I intend here.

I have been doing a spot of work for a friend who recently proposed NetApp MetroCluster. When I was at NetApp many years ago, I did not have a chance to get to know the solution well, but I do know its capabilities. After 6 years away, coming back to do a bit of NetApp was fun for me, because I was always very comfortable with NetApp technology. And NetApp MetroCluster, in this case NetApp Fabric MetroCluster, presented me an opportunity to get closer to the technology.

I have no doubt in my mind that this is one of the highest-availability storage solutions in the market, and NetApp is not modest about beating its own drum. It touts “No SPOF (Single Point of Failure)”, and rightly so, because it has put in all the right plugs for all the points that can fail.

NetApp Fabric MetroCluster is a continuous availability solution that stretches up to 100km. It is basically a NetApp cluster with mirrored storage, but with the two halves of the mirrored infrastructure sited very far apart, linked over Fibre Channel components and dark fibre. Here’s a diagram of how NetApp Fabric MetroCluster works in a VMware FT (Fault Tolerant) environment.

There’s a lot of simplicity in the design. When I started explaining it to the prospect, I was amazed how easy it was to articulate, without all the fancy technical jargon or fuss. I just said … “imagine a typical cluster, with an interconnect heartbeat and mirrored storage. Then imagine the 2 halves being pulled very far apart … that’s NetApp Fabric MetroCluster.” It was simply blissful.
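
For readers who prefer pseudo-code to analogies, here is a conceptual sketch (my own illustration, not NetApp’s implementation) of the synchronous mirroring idea underneath: a write is acknowledged to the host only after both halves, local and remote, have committed it.

```python
# A conceptual sketch (not NetApp code) of synchronous mirroring in a
# MetroCluster-style setup: a write is only acknowledged to the host after
# BOTH the local plex and the far-away remote plex have committed it, so
# either half can serve the data if the other site fails.
from dataclasses import dataclass, field

@dataclass
class Plex:
    name: str
    blocks: dict = field(default_factory=dict)

    def commit(self, lba: int, data: bytes) -> bool:
        self.blocks[lba] = data
        return True

@dataclass
class SyncMirror:
    local: Plex
    remote: Plex

    def write(self, lba: int, data: bytes) -> str:
        ok_local = self.local.commit(lba, data)
        ok_remote = self.remote.commit(lba, data)   # over FC switches and dark fibre
        if ok_local and ok_remote:
            return "ACK"      # host sees success only when both halves match
        return "RETRY"        # degrade or retry if one half is unreachable

mirror = SyncMirror(Plex("site-A"), Plex("site-B"))
print(mirror.write(42, b"payload"))   # -> ACK
```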

But then there was a lot of FUD (fear, uncertainty, doubt) thrown in by the competition, feeding the prospect with plenty of ammunition. Yes, I agree with some of the limitations, such as no SATA support for now. But then again, there is no perfect storage solution. In fact, Chris Mellor of The Register wrote about God’s box, the perfect storage, but to get to that level, be prepared to spend lots and lots of money! Furthermore, once you fix one limitation or bottleneck in one part of the storage, it introduces a few more challenges here and there. It’s never ending!

Side note: The conversation prompted the team to check with NetApp on SATA support in Fabric MetroCluster. It is already supported in ONTAP 8.1, whose present version is 8.1RC3, so SATA support will be here soon.

More FUD came along the way, and while I was doing my research I found some HP storage guys on the web hitting out at NetApp MetroCluster. Poor HP! If you do a search on NetApp MetroCluster, I am sure you will come across these 2 HP blogs from 2010 deriding the MetroCluster solution. Check out this one and the follow-up to the first blog. What these guys chose to do was to break the MetroCluster apart into 2 single controllers after a network failure, and attack it at that level.

Yes, when you break up the halves, it is basically a NetApp system with several single points of failure (SPOF). But then again, whose isn’t? Almost every vendor’s storage will have some SPOFs when you break the mirror.

Well, what I can tell you is this: the weakness of NetApp MetroCluster is that it is not continuous data protection (CDP). Once your applications have written garbage to one volume, the garbage is reflected on the mirrored volume. You can’t roll back, and you live with the data corruption. That is why storage vendors, including NetApp, offer snapshots – point-in-time copies you can roll back to, to a point before the data corruption occurred. CDP, by contrast, gives complete granularity of recovery down to every write I/O, and that is something NetApp does not have. That is NetApp MetroCluster’s weakness.
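
A toy sketch (my own, not any vendor’s product) makes the granularity difference concrete: a CDP-style journal records every write, so the volume can be rebuilt to the state just before any individual, possibly corrupting, write.

```python
# A toy sketch of why CDP offers finer recovery granularity than periodic
# snapshots: every write is journalled, so the volume can be rebuilt to the
# state just before any individual (possibly corrupting) write.
class CDPJournal:
    def __init__(self):
        self.log = []                      # ordered (lba, data) entries

    def write(self, lba: int, data: bytes) -> int:
        self.log.append((lba, data))
        return len(self.log)               # sequence number of this write

    def rebuild(self, up_to_seq: int) -> dict:
        """Replay the journal up to (and including) a given write."""
        volume = {}
        for lba, data in self.log[:up_to_seq]:
            volume[lba] = data
        return volume

journal = CDPJournal()
journal.write(0, b"good data")
bad_seq = journal.write(0, b"garbage")     # application corrupts the block

# Roll back to the write just before the corruption -- per-I/O granularity.
print(journal.rebuild(bad_seq - 1))        # {0: b'good data'}
```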

But CDP is aimed at data recovery, NOT data availability. It is focused on customers whose requirement is the ability to get data back to some usable state or form after a disaster (big or small), while the MetroCluster solution is focused on keeping the data available all the time. They are 2 different sets of requirements, so it depends on what the customer’s requirement is.

Then again, come to think of it, NetApp has no CDP technology of its own … does it?

Quest Software going private

Just a couple of days ago, Quest Software Inc. got an offer from Insight Venture Partners. The offer of USD$23 per share values the deal at close to USD$2 billion, and the company will be taken private.

This is the second big name to be taken off the market and go private recently, the first being BlueCoat, which agreed to be taken private for USD$1.3 billion by Thoma Bravo, a private equity firm.

Quest Software is the maker of the famous Oracle performance analyzer, Toad, and has acquired smaller companies like BakBone and Vizioncore in the past, but this time around it has become the acquisition target.

This points to an interesting fact: more and more public companies are going private. Here on home ground, the largest mobile carrier, Maxis, went private a few years ago.

Why? Typically, companies go private when the shareholders think the stock market does not give the company’s shares the right value, perhaps because the market has stagnated and is not growing. However, BlueCoat and Quest Software are not in stagnant markets. Security, application acceleration, data protection and data analytics are big markets on the cusp of exploding growth. So why are these companies going private?

Here are a few possible reasons (my take):

  1. With the buy-out, these companies can be free from the encumbrances that come with being a public company. Some of these include lengthy approvals from shareholders, the board of directors and regulators, which can slow the decision-making process.
  2. The new owners are looking at plans to expand into markets they cannot reach globally without being scrutinized by regulators and certain shareholders. Going private means they could offer their services across the globe in the cloud space, with fewer restrictions and prohibitions.
  3. They want to be really aggressive, and being public just bogs them down.
  4. The new owners plan to “shoeshine” these lackluster companies, hoping to sell them off again at a huge profit.

Thoma Bravo, for example, already has several companies in its security portfolio – Entrust, Hyland Software, SonicWALL and Tripwire – and the BlueCoat acquisition just adds to its “great view of security”. Thoma Bravo, as described, is a technology investment firm specializing in revamping and growing established companies.

Insight Venture Partners (IVP), too, is in the business of private equity and venture capital, and has invested in companies such as Solarwinds, Acronis and DataCore.

This Quest Software acquisition could be IVP’s biggest yet, but the question remains: why?

The marriage in the cloud

Admit it! You are a terabyte junkie! I am sure many of us have one terabyte or more of personal “stuff” at home. Heck, I even heard from a friend that he has almost 20TB of high definition movies (thank you, Torrent!) at home! That’s crazy!

And what does the typical Malaysian consumer do after he or she runs out of hard disk space? In KL (our beloved capital city, Kuala Lumpur), they throng the Low Yat IT mall or extensions of it, like Digital Mall in PJ Section 14. In other towns and cities in Malaysia, PC fairs are popular, as consumers try to get the best price possible. (We Malaysians are good at squeezing the max out of a deal.)

It is difficult for the not-so-IT-literate consumer to tell which brand is the best: Buffalo, Iomega, D-Link, Western Digital, and so on. But the tide is changing, because these vendors want to tie you down for the rest of your digital life. You see, buying a small NAS for the home now comes with a big carrot, an incentive to keep you wanting more, and yet you can’t unbind yourself from the tether once you are hooked.

Cloud storage didn’t take off in a big way last year. Many cloud storage vendors know there are plenty of opportunities out there, but how do they get consumers to upload their files, photos and whatever other stuff they might have to cloud storage? Ingeniously, they work together with the smaller NAS storage players and use these vendors’ product offerings as bait. They bundle a significantly large FREE capacity or data protection offering with the cloud storage as the carrot, and once the consumer decides to put their files in the cloud, boom, they are ensnared into becoming a long-term ATM for the cloud storage provider.

Sneaky? No? I call this good, smart marketing. You have a market of opportunities out there, but cloud storage isn’t catching on. You have small NAS vendors reaching out to the consumer market, but it’s a brutal, competitive arena and margins are razor thin. It’s a win-win situation for both sides.

And this trend is catching on. When I first read about Drobo (a high-end consumer NAS vendor) partnering with Carbonite (a remote backup vendor now repackaged as a cloud storage backup provider), I thought it was a pretty darn good idea, a marriage that happened in the cloud. Late last year another consumer NAS company, QNAP, paired up with Symform, a cloud storage and backup vendor.

This is a move towards a market that scratches the itch. Consumers want reliable backup too, but consumer-grade disk drives fail every so often, laptops get stolen, and files get infected by viruses. The list goes on, but the point is that the cloud storage providers may have found a silver lining in getting consumers to leap into the cloud. And the whole small-NAS-vendor-plus-big-cloud-backup dynamic duo just got a big endorsement last night. Guess who has decided to dip its grubby hands into the pie?

EMC, the 800-pound gorilla of the information and storage world, wants your money too, through its Iomega subsidiary! It has just married Iomega with EMC Atmos. As quoted:

“EMC subsidiary and data protection specialist Iomega announced the integration between Iomega network storage solutions and EMC Atmos, extending Atmos cloud-based data protection and sharing to Iomega’s network storage product offerings. The new integration gives small and midsize businesses (SMBs), remote offices and distributed enterprises access to any Atmos powered cloud around the world.”

Surprised? Not really, but I guess EMC needs to breathe new life into Atmos, and this marriage just extended Atmos’ life support.

We raid vRAID

I took a bit of time off to read through Violin’s vRAID technology, because I realized that vRAID (alongside Violin’s vXM architecture) is the other most important technology differentiating Violin Memory from the other upstarts. I blogged at a high level about Violin a few entries ago, and we continue Violin’s impressive entrance with a storage technology that has been around for almost 25 years – RAID. Incidentally, I found this picture of the original RAID paper (see below):

Has RAID evolved with solid state storage? Evidently not, because I have not read of any vendors (so far) touting a RAID revolution in their solid state offerings. There has been a lot of negative talk about RAID, but RAID has been the cornerstone and foundation of storage since the beginning. With the onslaught of very large capacity HDDs, the demands of packing more bits per inch and the insatiable need for reliability, however, RAID is slowly beginning to show its age. Cracks in the armour, I would say. And there are many newer, slightly more refined versions of RAID, from the Network RAID style of the HP P4000 or the Dell EqualLogic, to the RAID-X of IBM XIV, to the declustered RAID innovations of Panasas. (Interestingly, one of the authors of the original RAID paper, Garth Gibson, is a founder of Panasas.)

And the new vRAID from Violin doesn’t stray much from the good ol’ RAID, but it has been adapted to address the issues of solid state devices.

Solid State devices (notably NAND Flash since everyone is using them) are very different from the usual spinning disks of HDDs. They behave differently and pairing solid state devices with the present implementations of RAID could be like mixing oil and water. I am not saying that the present RAID cannot work with solid state devices, but has RAID adapted to the idiosyncrasies of Flash?

It is like putting an old crankshaft into a new car. It might work for a while, but in the long run it could damage the car. Similarly, conventional RAID might have a detrimental performance and availability impact on solid state devices. And we have hardly seen storage vendors come out to say that their RAID technology has been adapted to the solid state devices they are selling. This silence likely means they are just responding to market requirements and not changing their RAID code very much to take advantage of Flash, or other solid state storage for that matter. Violin Memory has boldly come forward to meet that requirement, and vRAID is its answer.

Violin argues that there are bottlenecks at the external RAID controller or software RAID level, as well as in the use of legacy disk drive interfaces. And this is indeed true, because these very common RAID implementations squeeze out performance at the expense of other components, such as CPU cycles.

Furthermore, there are plenty of idiosyncrasies in Flash, such as its erase-before-write behaviour. The nature of NAND Flash, unlike DRAM, requires a block to be erased before it can be written again. It does not “modify” in place the way the read-modify-write operation is applied in parity-based RAID 5 and 6. Because of this, the cycle is more like read-erase-write, and while the block erase is occurring, read operations are stalled. That is why most SSDs have impressive read latency (in microseconds) but much poorer write latency (in milliseconds) when erases get in the way. The parity-based RAID write penalty further aggravates the situation when typical RAID technology is applied to NAND Flash solid state storage.
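
Some rough, assumed numbers (mine, not Violin’s) show how the write penalty and the erase stalls compound; the latencies below are illustrative guesses, not measured figures.

```python
# Back-of-the-envelope sketch of the parity RAID small-write penalty and why
# it hurts more when the write path may trigger a slow flash block erase.
# The device IOPS and latencies are illustrative assumptions, not measurements.
WRITE_PENALTY = {"RAID-1": 2, "RAID-5": 4, "RAID-6": 6}  # back-end I/Os per host write

def effective_write_iops(device_iops: float, raid_level: str) -> float:
    """Host-visible random write IOPS once the read-modify-write penalty is paid."""
    return device_iops / WRITE_PENALTY[raid_level]

for level in ("RAID-1", "RAID-5", "RAID-6"):
    print(f"{level}: 100,000 device write IOPS -> "
          f"{effective_write_iops(100_000, level):,.0f} host write IOPS")

# A host write that lands on a block needing erase pays read + erase + write,
# so latency jumps from tens of microseconds into the millisecond range.
read_us, erase_us, program_us = 50, 2000, 50   # assumed NAND timings
print(f"read-erase-write latency ~ {read_us + erase_us + program_us} microseconds")
```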

As writes to the NAND Flash blocks build up, the accumulated read-erase-write cycles not only reduce the lifespan of the NAND Flash blocks, they also reduce the IOPS to a state called the normalized steady state. I wrote about this in my blog, “Not all SSDs are the same”, some moons ago. In that entry on the SNIA Solid State Storage Performance Test Specification (SSS PTS), I described the 3 distinct phases of a typical NAND Flash SSD:

  • Fresh Out of the Box (FOB)
  • Transition
  • Steady State

This performance degradation is part of what vendors call the “write cliff”, where there is a sudden drop in IOPS performance as the NAND Flash SSD ages. Here’s a graph that shows the performance drop.

Violin’s vRAID, implemented within its switched vXM architecture and using proprietary high-performance flash controllers, is able to deliver sustained IOPS throughout the lifespan of the flash, as shown below:

To understand vRAID, we have to understand the building blocks of the Violin storage array. Eight 4GB NAND Flash chips are packed into a 32GB Flash Package, and 16 of these 32GB Flash Packages are then consolidated into a 512GB VIMM (Violin Inline Memory Module). The VIMM is the basic building block and can be thought of as a “disk”, since we are used to the concept of a “disk” in the storage networking world. 5 of these VIMMs form a RAID group of 4+1 (four data and one parity), giving redundancy, performance and capacity similar to RAID-5.

The block size used is 4K, and this 4K block is striped across the RAID group as 1K pages, one on each of the VIMMs in the RAID group. Each 1K page is managed independently and can be placed anywhere in any flash block in the VIMMs, spread out for the lowest possible latency and best bandwidth. This contributes to the “spike-free latency” of Violin Memory. Additionally, there is ECC protection within each 1K page to correct flash bit errors.
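
Here is a quick arithmetic check of those building blocks, using the sizes stated above; the striping code is purely illustrative, not Violin’s implementation.

```python
# Quick arithmetic check of the building blocks described above (sizes as
# stated in the text; the layout below is illustrative, not Violin's code).
flash_chip_gb = 4
flash_package_gb = flash_chip_gb * 8          # 8 chips  -> 32 GB Flash Package
vimm_gb = flash_package_gb * 16               # 16 packages -> 512 GB VIMM
raid_group = 5                                # 4 data + 1 parity VIMMs
usable_gb = vimm_gb * (raid_group - 1)        # parity costs one VIMM's worth

print(f"Flash Package: {flash_package_gb} GB, VIMM: {vimm_gb} GB, "
      f"4+1 RAID group usable: {usable_gb} GB")

# A 4 KB block is striped as 1 KB pages across the 4 data VIMMs,
# with parity computed onto the 5th.
block = bytes(4096)
pages = [block[i:i + 1024] for i in range(0, 4096, 1024)]
print(f"{len(pages)} x 1 KB data pages per 4 KB block, plus 1 parity page")
```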

To protect against metadata corruption, there is an additional built-in RAID check bit to correct VIMM errors. Lastly, one important feature addresses the read-erase-write weakness of NAND Flash: vRAID ensures that slow erases never block a read or a write. This architectural feature enables spike-free latency in mixed read/write environments.

Here’s a quick overview of Violin’s vRAID architecture:

I still feel that we need a radical move away from traditional RAID, and vRAID is moving in the right direction to evolve RAID to meet the demands of the data storage market. Revolutionary and radical it may not be, but then again, is the market ready for anything else?

As I said, so far Violin is the only all-Flash vendor that has boldly come forward to meet the storage latency problem head-on, and they have been winning customers very quickly. Well done!

Protogon File System

I was out shopping yesterday and I was tempted to have lunch at Bar-B-Q Plaza, a popular Thai, Japanese-style hot plate barbeque restaurant in this neck of the woods. The mascot of this restaurant is Bar-B-Gon, a dragon-like character and it is obviously a word play of barbeque and dragon.

As I was reading the news this morning about the upcoming Windows Server 8 launch, I found out that the ever popular, often ridiculed NTFS (NT File System) of Windows will be going away. It will be replaced by “Protogon”, the codename for the new file system Microsoft is about to release. Protogon? A word play on prototype and dragon?

The new file system, which is backward compatible with NTFS, will be called ReFS, or Resilient File System. The design objectives of what Microsoft calls its “next generation” file system are clear and suited to present day requirements. I say present day requirements for a reason, because when I went through the key features of ReFS, the concepts and ideas are not exactly “next generation”. Many of these features are already present with most storage vendors we know of, but perhaps to people in the Windows world they might sound “next generation”.

ReFS, to me, is about time. NTFS has been around for a long, long time. It was first seen in the wild in 1993, and gained prominence and wide acceptance in Windows 2000 as the “enterprise-ready” file system. Indeed it was, because that was when Microsoft Windows began its march to dominance in the data centers, while the Unix vendors were still bickering about their versions of open standards. Active Directory (AD) and NTFS were the 2 key technologies that slowly, but surely, eroded Unix’s strengths in the data centers.

But over the years, as storage networking technologies like SAN and NAS were developing and maturing, I saw NTFS do little to exploit the strengths of these storage networking technologies and the relevant protocols in the data world. When I did a little bit of system administration on Windows (2000 and 2003, notably), I could feel that NTFS was developed with direct-attached storage (DAS) or internal disks in mind, definitely not taking full advantage of the strengths of Fibre Channel or iSCSI SAN. It was only in Windows Server 2008 that I felt Microsoft had finally had enough of pussyfooting around SAN and NAS, and introduced more decent disk storage management with features that work well natively with SAN. Now Microsoft can no longer sit quietly without acknowledging the need to build enterprise-ready technologies for storage networking and data management. And the core of the new Microsoft Windows Server 8 engine for that is ReFS.

One of the key technology objectives in the design of ReFS is backward compatibility. Windows has a huge market to address, and Microsoft cannot just shove NTFS away. The way they did it was to maintain the upper-level API and file semantics while building a new core file system engine, as shown in the diagram below:

ReFS is positioned with resiliency in mind. Here are a few resilient features:

  • Ability to isolate faults and perform data salvage on parts of the file system without taking the entire file system or volume offline. The goal of ReFS here is to be ONLINE and serving data all the time!
  • Checksumming of data and metadata for integrity. ReFS verifies all data and, in some cases, auto-corrects corrupted data. (A minimal sketch of this idea follows after this list.)
  • Optional integrity streams that provide protection against all forms of file-level data corruption. When enabled, whenever a file is changed, the modified copy is written to a different area of the disk than the original file. This way, even if the write operation is interrupted and the modified file is lost, the original file is still intact. (Doesn’t this sound like COW with snapshots?) When combined with Storage Spaces (we will talk about this later), which can store a copy of all files in a storage array on more than one physical disk, ReFS gives Windows a way to automatically find and open an uncorrupted version of a file in the event that a file on one of the physical disks becomes corrupted. Microsoft does not recommend integrity streams for applications or systems with a specific type of storage layout, or for applications that want finer control over their disk storage, for example databases.
  • Data scrubbing for latent disk errors. There is a tool, integrity.exe, which runs and manages the data scrubbing and integrity policies. The file attribute FILE_ATTRIBUTE_NO_SCRUB_DATA allows certain applications to skip this option and control integrity policies beyond what ReFS has to offer.
  • Shared storage pools across machines for additional fault tolerance and load balancing (ala Oracle RAC perhaps?)
  • Protection against bit rot, i.e. silent data corruption, which I blogged about many, many moons ago.
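
As promised above, here is a minimal sketch of the checksum-plus-write-elsewhere idea behind those bullets. It is my own toy illustration under stated assumptions, not Microsoft’s implementation: new data is written to a new location with its checksum, and reads fall back to the last copy whose checksum still verifies.

```python
# A minimal sketch (assumptions, not Microsoft's implementation) of two ideas
# in the bullets above: checksum data on write, and never update in place --
# write the new copy elsewhere and verify on read, falling back to the last
# good copy if the checksum does not match.
import hashlib

class ChecksummedCowStore:
    def __init__(self):
        self.extents = {}    # file -> list of (data, checksum), newest last

    def write(self, name: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self.extents.setdefault(name, []).append((data, digest))  # new location

    def read(self, name: str) -> bytes:
        for data, digest in reversed(self.extents[name]):          # newest first
            if hashlib.sha256(data).hexdigest() == digest:
                return data                                        # verified copy
        raise IOError("no intact copy found")

store = ChecksummedCowStore()
store.write("report.docx", b"version 1")
store.write("report.docx", b"version 2")
print(store.read("report.docx"))   # b'version 2', verified against its checksum
```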

An end-to-end resilient architecture is the goal.

From a file structure standpoint, here’s what ReFS looks like:

ReFS is copy-on-write (COW). As you know, I am a big fan of file systems in general, and COW is the approach I am most familiar with. NetApp’s Data ONTAP, ZFS on Oracle Solaris and the upcoming Linux BTRFS are all implementations of COW. Similar to BTRFS, ReFS uses a B+ tree implementation, and as described in Wikipedia,

ReFS uses B+ trees for all on-disk structures including metadata and file data. The file size, total volume size, number of files in a directory and number of directories in a volume are limited by 64-bit numbers, which translates to maximum file size of 16 Exbibytes, maximum volume size of 1 Yobibyte (with 64 KB clusters), which allows large scalability with no practical limits on file and directory size (hardware restrictions still apply). Metadata and file data are organized into tables similar to relational database. Free space is counted by a hierarchal allocator which includes three separate tables for large, medium, and small chunks. File names and file paths are each limited to a 32 KB Unicode text string.
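
A quick back-of-the-envelope check in Python shows that the quoted limits are simply the consequence of those 64-bit counts (with the 64 KB cluster size stated above).

```python
# Back-of-the-envelope check of the limits quoted above: with 64-bit counts,
# 2^64 bytes is 16 EiB (the stated max file size) and 2^64 clusters of 64 KB
# is 2^80 bytes, i.e. 1 YiB (the stated max volume size).
EiB = 2 ** 60
YiB = 2 ** 80

max_file_bytes = 2 ** 64                      # 64-bit file size
max_volume_bytes = (2 ** 64) * (64 * 1024)    # 64-bit cluster count x 64 KB clusters

print(f"max file size  : {max_file_bytes / EiB:.0f} EiB")    # 16 EiB
print(f"max volume size: {max_volume_bytes / YiB:.0f} YiB")  # 1 YiB
```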

Alongside ReFS, Microsoft introduces Storage Spaces. The concept is very, very similar to ZFS, with the seamless integration of a volume manager, RAID management and a highly resilient file system. And ZFS is 10 years old. So much for ReFS being “next generation”. Here is a series of screenshots of what Storage Spaces looks like:

And with this “flexible volume management”, ala ONTAP FlexVol and ZFS, you can add disk drives on the fly and grow your volumes online, in real time.

ReFS inherits many NTFS features as it inches towards the Windows Server 8 launch date. Some of the features mentioned are BitLocker encryption, Access Control Lists (ACLs) for security (naturally), symbolic links, volume snapshots, file IDs and opportunistic locking (oplocks).

ReFS is intended to scale, as Microsoft says, “to extreme limits”. Here is a table describing those limits:

ReFS’s new technology will certainly bring Windows up to the stringent availability and performance requirements of modern day file systems, but the storage networking world is also evolving into the cloud computing space. Object-based storage is also coming into play as market trends dictate new requirements, and file systems, in order to survive, must continue to evolve.

Microsoft’s file system took a long, long time to get from NTFS to ReFS, but can Microsoft continue to innovate and change the rules of the data storage game? We shall see …

IDC 4Q11 Tracker numbers are in

It was a challenging 2011, but the tremendous growth of data continues to spur the growth of storage. According to IDC in its latest Worldwide Quarterly Disk Storage Systems Tracker, the storage market grew a healthy 7.7% in factory revenues, and the total disk storage capacity shipped was 6,279 petabytes, up 22.4% year-on-year! What Greg Schulz once said is absolutely true: “There is no recession in storage.”

Let’s look at the numbers. Overall, the positions of the storage vendors did not change much, but to me, the more exciting part is the growth quarter over quarter.

EMC and NetApp continue to post double digit growth, at 25.9% and 16.6% respectively, once again taking market share from HP and others. HP, with the upheaval going on throughout the organization right now, got hit badly with a decline of 3.8%, while IBM held ground at 0.0%. And growth of 0.0% when data is growing at more than 20% is not good, not good at all.

HDS has been continuing its momentum with a good story, taking a decent 11.6%, a fantastic number by HDS standards. I have been out with my HDS buddies, and I can feel an excitement and energy in them that I have never seen before. That is a good indicator of the innovation and new technology coming out of HDS. They just need to work on their marketing and tell the industry more about what they are doing. The Japanese can be so modest.

From the report, 2 things peeved me.

  • IDC reports that NetApp and HP are *tied* at 3rd. This does not make any sense at all! How can they be tied when NetApp has double digit growth, 11.2% market share and a revenue of USD$734 million while HP has negative growth, 10.3% market share and USD$677 million revenue? The logic boggles my mind!
  • They lumped Dell and Oracle into “others”! And “others” had -1.4% growth. I am eager to find out how these 2 companies are doing, especially Dell, who has been touting superb growth in its storage story.

Meanwhile, here’s the table for the TOTAL Worldwide Disk Storage market share for 4Q11.

Numbers don’t lie. HP and IBM, in 2nd and 3rd place respectively, are not in good shape. Negative growth in an upward-trending market spells more trouble in the long run, and they had better buck up.

In this table, Dell gained and went up to #4, ahead of NetApp, and from the look of things Dell is doing all the right things to make its storage market story gel together. Kudos! In fact, NetApp’s position and perception over the last 2-3 quarters have been shaky, with Dell and HDS breathing down its neck. There isn’t likely to be a significant dent made in NetApp by HDS or Dell at this point in time, but having been “the little engine that could” (that’s what I used to call NetApp) for the last few years, NetApp seems to be losing a bit of the extra “oomph” that excited the market in the past.

Lastly, I just want to say that my comments are based on the facts and figures in the tables published by IDC. I remember the last time I commented with this approach, buddies of mine in the industry disagreed with me, saying that each of them was doing great in the Malaysian or South East Asian market.

Sorry guys, I blog what I see, and I welcome you to take me to your sessions to let me know how well you are doing here. I would be glad to write more about it. (Hint, hint.)