Server way of locked-in storage

It is kind of interesting that every vendor out there claims to be as open as they can be, but the reality is different. The competitive nature of the game is forcing storage vendors to talk open, while their actions are certainly not.

Confused? I am beginning to see a trend … a trend that forces customers to be locked in with a certain storage vendor. I am beginning to feel that customers are being given fewer choices, especially when the brand of the server they select for their applications has implications on the brand of storage they will be locked into.

And surprise, surprise, SSDs are the pawns in this new cloak-and-dagger game. How? Well, I have been observing this for quite a while now, and when HP announced their SMART portfolio for their storage, it was time for me to say something.

In the announcement, it was reported that HP is coming out with its 8th generation ProLiant servers. As quoted:

The eighth generation ProLiant is turbo-charging its storage with a Smart Array containing solid state drives and Smart Caching.

It also includes two Smart storage items: the Smart Array controllers and Smart Caching, which both feature solid state storage to solve the disk I/O bottleneck problem, as well as Smart Data Services software to use this hardware.

From the outside, analysts are claiming this is a reaction to the recent EMC VFCache product (I blogged about it here), and HP was quick to paint the EMC VFCache solution as a first-generation product, lacking the smarts (pun intended) of what the HP products have to offer. You can read about its performance prowess in the HP Connect blog.

Similarly, Dell announced their ExpressFlash solution, which ties its 12th generation PowerEdge servers to their flagship (what else), Dell Compellent storage.

The idea is very obvious. Put a PCIe-based flash caching card in the server, and use a proprietary caching/tiering technology that ties the server to a certain brand of storage. Only with this card, which (incidentally) works only with this brand of servers, will you, Mr. Customer, be able to take advantage of the performance power of this brand of storage. Does that sound open to you?

HP is doing it with its ProLiant servers; Dell is doing it with its ExpressFlash; EMC's VFCache, while not advocating any particular brand of server, is doing it because VFCache works only with EMC storage. We have seen Oracle doing it with Oracle Exadata: the Oracle Enterprise database works best with Oracle's own storage, and the intelligence is in its SmartScan layer, a proprietary technology that works exclusively with the storage layer in the Exadata. Hitachi Japan, with its Hitachi servers (yes, Hitachi servers, which we rarely see in Malaysia), has had such a technology for the last 2 years. I wouldn't be surprised if IBM and Fujitsu already have something in store (or perhaps I missed the announcement).

NetApp has been slow in this game, but we hope to see them coming out with their own server-based caching products soon. More pure-play storage vendors are already singing the tune of SSDs (though not necessarily server-based).

The trend is obvious too, because the messaging is almost always about storage performance.

Yes, I totally agree that storage (any storage) has a performance bottleneck, especially when it comes to IOPS, response time and throughput. And every storage vendor is claiming that SSDs, in one form or another, are the knight in shining armour, ready to rid the world of lousy storage performance. Well, SSDs are not the panacea for storage performance headaches, because while they solve some performance issues, they introduce new ones elsewhere.
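
To put those three metrics side by side, here is a minimal back-of-the-envelope sketch in Python. The latencies and block sizes are purely illustrative assumptions, not figures from any vendor datasheet:

```python
# Rough relationships between response time, IOPS and throughput.
# All numbers below are illustrative assumptions.

def max_iops_per_device(avg_latency_ms: float) -> float:
    """A device needing avg_latency_ms per random I/O serves roughly
    1000 / avg_latency_ms IOPS (single queue, back of envelope)."""
    return 1000.0 / avg_latency_ms

def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Throughput is simply IOPS multiplied by the I/O size."""
    return iops * block_size_kb / 1024.0

print(max_iops_per_device(5.0))    # a spinning disk at ~5 ms: ~200 IOPS
print(max_iops_per_device(0.1))    # an SSD read at ~0.1 ms: ~10,000 IOPS
print(throughput_mb_s(10_000, 8))  # 10,000 IOPS of 8K blocks: ~78 MB/s
```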

But it is becoming an excuse to introduce storage vendor lock-in, and how have customers responded to this new “concept”? Things are fairly new right now, but I would always advise customers to find out and ask questions.

Cloud storage for no vendor lock-in? Going to the cloud has its own cloud service provider lock-in as well, but that's another story.

 

We raid vRAID

I took a bit of time off to read through Violin's vRAID technology because I realized that vRAID (other than Violin's vXM architecture) is the other most important technology that differentiates Violin Memory from the other upstarts. I blogged at a high level about Violin a few entries ago, and we are continuing Violin's impressive entrance with a storage technology that has been around for almost 25 years – RAID. Incidentally, I found this picture of the original RAID paper (see below):

Has RAID evolved with solid state storage? Evidently not, because I have not read of any vendor (so far) touting any RAID revolution in their solid state offerings. There has been a lot of negative talk about RAID, but RAID has been the cornerstone and foundation of storage since the beginning. But with the onslaught of very large capacity HDDs, the demands of packing more bits per inch and the insatiable need for reliability, RAID is slowly beginning to show its age. Cracks in the armour, I would say. And there are many newer, slightly more refined versions of RAID, from the Network RAID style of the HP P4000 or the Dell EqualLogic, to the RAID-X of IBM XIV, to the innovations of declustered RAID in Panasas. (Interestingly, one of the authors of the original RAID paper, Garth Gibson, is the founder of Panasas.)

And the new vRAID from Violin Memory doesn't stray much from the good ol' RAID, but it has been adapted to address the issues of solid state devices.

Solid state devices (notably NAND Flash, since everyone is using it) are very different from the usual spinning disks of HDDs. They behave differently, and pairing solid state devices with present implementations of RAID can be like mixing oil and water. I am not saying that present RAID cannot work with solid state devices, but has RAID adapted to the idiosyncrasies of Flash?

It is like putting an old crankshaft into a new car. It might work for a while, but in the long run it could damage the car. Similarly, conventional RAID might have a detrimental performance and availability impact on solid state devices. And we have hardly seen storage vendors come out to say that their RAID technology has been adapted to the solid state devices they are selling. This silence likely means that they are just adapting to market requirements and not changing their RAID code very much to take advantage of Flash, or other solid state storage for that matter. Violin Memory has boldly come forward to meet that requirement, and vRAID is their answer.

Violin argues that there are bottlenecks at the external RAID controller or software RAID level, as well as in the use of legacy disk drive interfaces. And this is indeed true, because this very common RAID implementation squeezes out performance at the expense of other components such as CPU cycles.

Furthermore, there are plenty of idiosyncrasies in Flash, such as its erase-first, then-write mechanism. The nature of NAND Flash, unlike DRAM, requires a block to be erased before a write to that block is allowed. It does not “modify” in place, the way the read-modify-write operation is applied in parity-based RAID-5 and RAID-6. Because of this, it is more like read-erase-write, and while the erase of the block is occurring, the read operation is stalled. That is why most SSDs have impressive read latency (in microseconds) but very poor write latency (in milliseconds). Furthermore, the parity-based RAID write penalty can further aggravate the situation when typical RAID technology is applied to NAND Flash solid state storage.
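
To make the write penalty point concrete, here is a small sketch using the commonly quoted penalty factors for mirrored and parity RAID. The back-end IOPS figure is an assumption for illustration only:

```python
# Parity RAID write penalty, the "read-modify-write" mentioned above.
# RAID-5 turns one host write into 4 back-end I/Os (read data, read parity,
# write data, write parity); RAID-6 into 6; RAID-1 into 2. On NAND Flash
# the write legs may additionally trigger the slower erase-then-program cycle.

WRITE_PENALTY = {"RAID-1": 2, "RAID-5": 4, "RAID-6": 6}

def effective_write_iops(raw_backend_iops: float, raid_level: str) -> float:
    """Host-visible write IOPS once the back end absorbs the extra I/Os."""
    return raw_backend_iops / WRITE_PENALTY[raid_level]

backend_iops = 100_000  # assumed aggregate back-end IOPS of a flash shelf
for level in WRITE_PENALTY:
    print(level, int(effective_write_iops(backend_iops, level)), "host write IOPS")
```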

As the written blocks in the NAND Flash build up, the accumulation of read-erase-write cycles will not only reduce the lifespan of the blocks in the NAND Flash, it will also reduce the IOPS to a normalized Steady State. I wrote about this in my blog, “Not all SSDs are the same”, some moons ago. In that blog, referencing the SNIA Solid State Storage Performance Test Specification (SSS-PTS), I described the 3 distinct phases of a typical NAND Flash SSD:

  • Fresh Out of the Box (FOB)
  • Transition
  • Steady State

This performance degradation is part of what vendors call the “Write Cliff”, where there is a sudden drop in IOPS performance as the NAND Flash SSD ages. Here's a graph that shows the performance drop.

Violin's vRAID, implemented within its switched vXM architecture and using proprietary high performance flash controllers together with the flash-optimized vRAID technology, is able to deliver sustained IOPS throughout the lifespan of the flash SSD, as shown below:

To understand vRAID, we have to understand the building blocks of the Violin storage array. Eight 4GB NAND Flash chips are packed into a 32GB Flash Package, and 16 of these 32GB Flash Packages are then consolidated into a 512GB VIMM (Violin Inline Memory Module). The VIMM is the starting block and can be considered a “disk”, since we are used to the concept of “disk” in the storage networking world. 5 of these VIMMs form a RAID group of 4+1 (four data and one parity), giving redundancy, performance and capacity similar to RAID-5.
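
A quick recap of that building-block arithmetic, as I read it from Violin's description above (my own summary, not an official spec sheet):

```python
# Violin building blocks, as described in the post:
# 8 x 4GB NAND Flash chips  -> one 32GB Flash Package
# 16 x 32GB Flash Packages  -> one 512GB VIMM
# 5 VIMMs                   -> one 4+1 vRAID group (4 data + 1 parity)

CHIP_GB = 4
CHIPS_PER_PACKAGE = 8
PACKAGES_PER_VIMM = 16
VIMMS_PER_RAID_GROUP = 5  # 4 data + 1 parity

package_gb = CHIP_GB * CHIPS_PER_PACKAGE                      # 32 GB
vimm_gb = package_gb * PACKAGES_PER_VIMM                      # 512 GB
raid_group_raw_gb = vimm_gb * VIMMS_PER_RAID_GROUP            # 2560 GB raw
raid_group_usable_gb = vimm_gb * (VIMMS_PER_RAID_GROUP - 1)   # 2048 GB usable

print(package_gb, vimm_gb, raid_group_raw_gb, raid_group_usable_gb)
```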

The block size used is 4K, and this 4K block is striped across the RAID group as 1K pages, one on each of the VIMMs in the RAID group. Each of these 1K pages is managed independently and can be placed anywhere in any flash block in the VIMMs, spread out for the lowest possible latency and best bandwidth. This contributes to the “spike-free latency” of Violin Memory. Additionally, there is ECC protection within each 1K page to correct flash bit errors.
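
Here is a toy sketch of how a 4K block could be broken into 1K pages across a 4+1 group with XOR parity. This is my own simplification to illustrate the layout described above, not Violin's actual vRAID implementation:

```python
# Toy 4+1 layout: a 4K block becomes four 1K data pages plus one 1K XOR
# parity page, one page per VIMM. Losing any one VIMM is recoverable.

PAGE_SIZE = 1024  # 1K pages, as described in the post

def split_into_pages(block_4k: bytes) -> list[bytes]:
    assert len(block_4k) == 4 * PAGE_SIZE
    return [block_4k[i:i + PAGE_SIZE] for i in range(0, len(block_4k), PAGE_SIZE)]

def xor_parity(pages: list[bytes]) -> bytes:
    parity = bytearray(PAGE_SIZE)
    for page in pages:
        for i, byte in enumerate(page):
            parity[i] ^= byte
    return bytes(parity)

block = bytes(range(256)) * 16           # a dummy 4K block
data_pages = split_into_pages(block)     # would go to the 4 data VIMMs
parity_page = xor_parity(data_pages)     # would go to the parity VIMM

# If one VIMM is lost, its page is the XOR of the surviving four pages.
recovered = xor_parity(data_pages[1:] + [parity_page])
assert recovered == data_pages[0]
```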

To protect against metadata corruption, there is an additional, built-in RAID check bit to correct VIMM errors. Lastly, one important feature addresses the read-erase-write weakness of NAND Flash: vRAID ensures that the slow erases never block a Read or a Write. This architectural feature enables spike-free latency in mixed Read/Write environments.

Here’s a quick overview of Violin’s vRAID architecture:

I still feel that we need a radical move away from traditional RAID, but vRAID is moving in the right direction, evolving RAID to meet the demands of the data storage market. Revolutionary and radical it may not be, but then again, is the market ready for anything else?

As I said, so far Violin is the only all-Flash vendor that has boldly come forward to meet the storage latency problem head-on, and they have been winning customers very quickly. Well done!

Violin pulling the strings

Violin Memory is on our shores as we speak. There is already confirmed news that an EMC veteran in Singapore has joined them and will be surfacing soon in the South Asia region.

Of all the all-Flash storage systems on my platter, Violin Memory seems to be the only one that is ready for an IPO this year, after having taken in USD$75 million worth of funding in 2011. That is an impressive number considering that the economic climate last year was not so great. But what is so great about Violin Memory that is attracting the big money? Both Juniper Networks and Toshiba America are early investors.

I am continuing my quest to look at all-Flash storage systems, after my blogs on Pure Storage, Kaminario and SolidFire. (Actually, I wanted to write about another all-Flash player first because it keeps bugging me with its emails … but I am feeling annoyed about that one right now.) Violin Memory is here and now.

From a technology standpoint, there are a few key technologies, notably their vRAID and their Violin Switched Memory architecture (vXM), both patent pending. Let’s explore these 2 technologies.

At the core of Violin Memory is the vXM, a proprietary, patent-pending memory switching fabric, which Violin claims to be the first in the industry. The architecture uses high speed, fault tolerant memory controllers and FPGAs (field programmable gate arrays) to switch between corresponding, fully redundant elements of VIMMs (Violin Inline Memory Modules). The high level vXM architecture is shown below:

 

VIMMs are the building blocks, an aggregation of memory modules which can be of different memory types. The example below shows the aggregation of Toshiba MLC chips, which eventually form the VIMMs and are further consolidated into the full capacity Flash array.

The memory switching fabric of the vXM architecture enables very high speed data switching and routing, and hence Violin can boast of having “spike-free latency”, something we in this industry desperately need.

Another cool technology that Violin has is their hardware-based vRAID. This is a RAID algorithm designed to work with Flash and other solid state storage devices. I am going through the Violin Memory white paper now and the technology is some crazy, complicated sh*t. This is presented on their website about the low latency vRAID:

 

I don’t want to sound stupid writing about the vRAID now, and I probably need to digest the whitepaper several times in order to understand the technology better. And I will let you know once I have a fair idea of how this works.

More about Violin Memory later. Meanwhile, a little snag came up when a small Texas company, Narada Systems, filed a patent infringement suit against Violin on January 5, 2012. The suit alleges that the vXM violates the technology and intellectual property of patents #6,504,786 and #7,236,488, and claims damages from Violin Memory. You can read about the legal suit here.

Whether this legal suit will affect Violin Memory is anybody's guess, but the prospect of Violin Memory going for an IPO in just a few short years validates how the industry is looking at the solid state storage solutions out there.

I have already mentioned a handful of solid state storage players whom I call “all-Flash”, and over at the Network Computing site, blogger Howard Marks revealed 2 more stealth-mode solid state start-ups in XtremIO and Proximal Data. This validates the industry's confidence in solid state storage, and in 2012 we are going to see a goldrush in this technology.

The storage industry is dying for a revamp on the performance side, and living with the bane of poor spinning disk performance for years has made the market hungry for IOPS, low latency and throughput. Solid state storage is ripe, and I hope this will trigger newer architectures in storage, especially RAID. Well done, Violin Memory!

 

Solid?

The next all-Flash product on my review list is SolidFire. Immediately, the niche that SolidFire is trying to carve out is obvious. It's not for regular commercial customers. It is meant for Cloud Service Providers, because the features and the technology that they have innovated are very much cloud-intended.

Are they solid (pun intended)? Well, if they have managed to secure Series B funding of USD$25 million (USD$37 million overall) from VCs such as NEA and Valhalla, and also angel investors such as Frank Slootman (ex-Data Domain CEO) and Greg Papadopoulos (ex-Sun Microsystems CTO), then obviously there is something more than meets the eye.

The one thing I got while looking up SolidFire is that there is probably a lot of technology and innovation behind their nodes and their Element OS. They hold their cards very, very close to their chest, and I couldn't get much good technology-related information from their website or from Google. But here's a look at what the SolidFire looks like:

SolidFire has only one product model, and that is the 1U SF3010. The SF3010 has 10 x 2.5″ 300GB SSDs, giving it a raw total of 3TB per 1U. The minimum configuration is 3 nodes, and it scales to 100 nodes. The reason for starting with 3 nodes is, of course, redundancy. Each SF3010 node has 8GB NVRAM and 72GB RAM, and sports 2 x 10GbE ports for iSCSI connectivity, which is no surprise since the core engineering talent came from LeftHand Networks (whose product is now the HP P4000). There is no Fibre Channel or NAS front end to the applications.

Each node runs 2 x Intel Xeon 2.4GHz 6-core CPUs. The 1U height is important to the cloud provider, as the price of floor space is an important consideration.

Aside from the SF3010 storage nodes, the other important ingredient is their SolidFire Element OS.

Cloud storage needs to be available. The SolidFire Helix Self-Healing data protection is a feature that is capable of handling multiple concurrent failures across all levels of their storage. Data blocks are replicated randomly but intelligently across all storage nodes to ensure that the failure of, or disruption of access to, a particular data block is circumvented with another copy of the data block somewhere else within the cluster. The idea is not new, but it is effective; solutions such as EMC Centera and IBM XIV employ a similar approach for data availability. Still, the ability to self-heal makes for a very highly available storage platform where data is always available.
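
To illustrate the idea of random-but-intelligent replica placement, here is a generic sketch. The node names, replica count and placement policy are my assumptions, not SolidFire's actual Helix algorithm:

```python
# Generic sketch of random replica placement across a cluster.
# Concept illustration only, not SolidFire's Helix implementation.
import random

def place_replicas(nodes: list[str], copies: int = 2) -> list[str]:
    """Pick 'copies' distinct nodes at random for one data block."""
    return random.sample(nodes, copies)

cluster = [f"node-{i}" for i in range(1, 6)]   # hypothetical 5-node cluster
placement = {f"block-{b}": place_replicas(cluster) for b in range(8)}

def survives(placement: dict, failed_node: str) -> bool:
    """Every block must keep at least one copy on a surviving node."""
    return all(any(n != failed_node for n in nodes) for nodes in placement.values())

print(placement)
print(survives(placement, "node-3"))  # True, since each block has 2 distinct copies
```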

To address storage efficiency, having 3TB raw in the SF3010 is definitely not sufficient. Therefore, the Element OS always has thin provisioning, real-time compression and in-line deduplication turned on. These features cannot be turned off and operate at a fine-grained 4K block size. Also important are the intelligence to reclaim zeroed blocks, no reservations, and no data movement in these innovations. This means that there is no I/O impact, as claimed by SolidFire.
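
As a rough illustration of what fine-grained 4K inline efficiency means, here is a minimal dedupe-and-zero-reclaim sketch. The real Element OS pipeline (and its compression stage) is not public, so treat this purely as a concept demo:

```python
# Minimal sketch of 4K-granularity inline deduplication with zero-block
# reclaim: hash each 4K block and store only unique, non-zero blocks.
import hashlib

BLOCK = 4096

def write_stream(data: bytes, store: dict) -> list[str]:
    """Return the list of block fingerprints making up this write."""
    refs = []
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK]
        if chunk == b"\x00" * len(chunk):
            refs.append("ZERO")            # zeroed blocks consume no space
            continue
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)        # only the first copy consumes space
        refs.append(fp)
    return refs

store = {}
refs = write_stream(b"A" * BLOCK * 3 + b"\x00" * BLOCK, store)
print(f"{len(refs)} logical blocks written, {len(store)} unique block(s) stored")
```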

But the one feature that differentiates SolidFire when targeting storage for Cloud Service Providers is their guaranteed volume-level Quality of Service (QoS). This is important, and SolidFire has turned their QoS settings into an advantage. As a best practice, Cloud Service Providers should always leverage the QoS functionality to improve their storage utilization.

The QoS settings are listed below, and a quick sketch of how a CSP might apply them follows the list:

  • Minimum IOPS – Lower IOPS means lower performance priority (makes good sense)
  • Maximum IOPS
  • Burst IOPS – for those performance spikes moments
  • Maximum and Burst MB/sec
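
Here is a sketch of how a CSP might push those settings down to a volume through the API. The endpoint, method and field names are my assumptions based on how I understand the Element API to work; check SolidFire's documentation for the exact call in your API version:

```python
# Sketch of setting per-volume QoS through a JSON-RPC style call.
# Method and parameter names below are assumptions, not verified signatures.
import requests

def set_volume_qos(mvip: str, auth: tuple, volume_id: int,
                   min_iops: int, max_iops: int, burst_iops: int) -> dict:
    payload = {
        "method": "ModifyVolume",          # assumed method name
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,       # guaranteed floor
                "maxIOPS": max_iops,       # sustained ceiling
                "burstIOPS": burst_iops,   # short spike allowance
            },
        },
    }
    # Management IPs often use self-signed certs, hence verify=False here.
    r = requests.post(f"https://{mvip}/json-rpc/7.0", json=payload,
                      auth=auth, verify=False)
    return r.json()

# Example (commented out, needs a live cluster):
# set_volume_qos("cluster.example.com", ("admin", "password"),
#                volume_id=42, min_iops=500, max_iops=5000, burst_iops=10000)
```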

The combination of QoS and storage capacity efficiency gives SolidFire an edge: cloud providers can scale both performance and capacity in a more balanced manner, something that is not so simple with traditional storage vendors that rely on lots of spindles to achieve IOPS performance, sacrificing capacity in the process. But then again, with SSDs, the IOPS are plentiful (for now). SolidFire does not boast performance numbers of millions of IOPS or throughput in the tens of Gigabytes per second like Violin, Virident or Kaminario, but what they want is to be recognized as cloud storage as it should be in a cloud service provider environment.

SolidFire calls this Performance Virtualization. Just as we carve storage volumes from a capacity pool, SolidFire allows different performance profiles to be carved out of a performance pool. This gives SolidFire the ability to mix storage capacity and storage performance in a seemingly independent manner, customizing the type of storage bundling required of cloud storage.

In fact, SolidFire claims only 50,000 IOPS per storage node (including the IOPS needed for replicating data blocks). Together with their native multi-tenancy capability, the 50,000 or so IOPS will align well with many virtualized applications, rather than focusing on a 10x performance improvement for a single application. Their approach is about a more balanced and spread-out I/O architecture for cloud service providers and the applications they service.
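
Some quick arithmetic on that 50,000 IOPS figure: if every host write also generates a replica write inside the cluster, the host-visible number is lower than the raw per-node figure. The read/write mix and replica count below are assumptions for illustration:

```python
# Back-of-the-envelope: host-visible IOPS per node once replication
# traffic is accounted for. Mix and replica count are assumptions.

raw_iops_per_node = 50_000
replication_copies = 2     # primary plus one replica (assumed)
write_fraction = 0.3       # assumed 70/30 read/write mix

def host_visible_iops(raw: float, copies: int, write_frac: float) -> float:
    # Reads cost 1 back-end I/O, writes cost 'copies' back-end I/Os.
    cost_per_host_io = (1 - write_frac) * 1 + write_frac * copies
    return raw / cost_per_host_io

print(int(host_visible_iops(raw_iops_per_node, replication_copies, write_fraction)))
# ~38,000 host-visible IOPS per node under these assumptions
```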

Their management is also targeted at the cloud. It has a REST API that integrates easily with OpenStack, Citrix CloudStack and VMware vCloud Director. This seamless and easy integration is all the more relevant because the CSPs already have their own management tools. That is why the SolidFire API is REST-based and integration-ready to do just that.

The power of the SolidFire API is probably overlooked by storage professionals trained in the traditional manner. But what the SolidFire API does is provide the full (I mean FULL) capability of managing and provisioning the SolidFire storage. Fronting the API with REST means that it is really easy to integrate with an existing CSP management interface.

Together with the storage nodes and the Element OS, the whole package is aimed at being a more significant storage platform for Cloud Service Providers (CSPs). Storage has always been a tricky component in Cloud Computing (despite what all the storage vendors might claim), but SolidFire touts that their solution focuses on what matters most for CSPs.

CSPs would want to maximize their investment without losing their edge in the cloud offerings to their customers. SolidFire lists their benefits in these 3 areas:

  • Performance
  • Efficiency
  • Management

The edge in cloud storage is definitely solid for SolidFire. Their ability to leverage their position and steer away from other all-Flash vendors' battlezone makes sense, as they aim to gain market share in the Cloud Service Provider space. I only wish they would share more about their technology online.

Fortunately, I found a video by SolidFire's CEO, Dave Wright, which gives great insight into SolidFire's technology. Have a look (it's almost 2 hours long):

[2 hours later]: Phew, I just finished the video above and the technology is solid. Just to summarize,

  • No RAID (which is a Godsend for service providers)
  • Aiming for USD5.00 or less per Gigabyte (a good number!)
  • General availability in Q1 2012

Lots of confidence in the superiority of their technology, as portrayed by their CEO, Dave Wright.

Solid? Yes, Solid!

Kaminario who?

The name “Kaminario” intrigues me and I don't know the meaning of it. It rolls off the tongue nicely, until you say it a few times, fast, and your tongue gets twisted in a jiffy.

Kaminario is one of the few prominent startups in the all-flash storage space, having received USD$15 million in Series C funding from big-gun VCs Sequoia and Globespan Capital Partners in 2011. That brought their total to USD$34 million, and also brought them the attention of the storage market.

I am beginning my research into their technology and their product line, the K2, to see why they are special. I am looking for an angle that differentiates them, how they position themselves in the market, and why they deserved Series C funding.

Kaminario was founded in 2008, with their headquarters in Boston, Massachusetts. They have a strong R&D facility in Israel, and looking at their management lineup, they are headed by several personalities with an Israeli background.

All this shouldn't be a problem to many, except for the fact that Malaysia doesn't recognize Israel diplomatically, and some companies here, especially in government, might have an issue with the Israeli link. But then again, we have a lot of hypocrites in Malaysian politics and I am not going there in my blog. It's a waste of my time.

The key technology is Kaminario's K2 SPEAR Architecture, and it defines a fundamental method to store and retrieve performance-sensitive data. Yes, since this is an all-Flash storage solution, performance numbers, speeds and feeds are the “weapons” to influence prospects with high performance requirements. Kaminario touts that their storage solution scales up to 1.5 million IOPS and 16GB/sec throughput, and indeed these are fantastic numbers when you compare them with conventional HDD-based storage platforms. But nowadays, if you are in the all-Flash game, everyone else is touting similar performance numbers as well. So, it is no biggie.
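
A small sanity check on those headline numbers (my own arithmetic, not Kaminario's claim): 1.5 million IOPS and 16GB/sec cannot both come from the same I/O size, so they are almost certainly quoted for different workloads, small random I/O versus large sequential transfers:

```python
# If both headline figures applied to the same workload, the implied
# average I/O size would be about 11 KB, an odd block size, which is why
# IOPS and throughput maximums are usually measured separately.
iops = 1_500_000
throughput_bytes_per_sec = 16 * 1024**3
implied_io_kb = throughput_bytes_per_sec / iops / 1024
print(round(implied_io_kb, 1), "KB implied average I/O size")  # ~11.2 KB
```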

The secret sauce of the Kaminario technology is, of course, its architecture – SPEAR. SPEAR stands for Scale-out Performance Storage Architecture. While Kaminario states that their hardware is pretty much off-the-shelf, open industry standard, somehow under the covers the SPEAR architecture could incorporate some special, proprietary design in its hardware to maximize the SPEAR technology. Hence, I believe there is a reason why Kaminario chose a blade-based system for the enclosures in its rack. Here's a look at their hardware offering:

Using blades is a good idea because blades offer integrated wiring, consolidation, simple plug-and-play, ease of support, N+1 availability and so on. But this also puts Kaminario in a position of all-blades or nothing. This is something some customers in Malaysia might have to get used to, because many would prefer their racks. I could be wrong, and let's hope I am.

Each enclosure houses 16 blades, with N+1 availability. As I go through Kaminario's architecture, the word availability is becoming louder, and this could be something that differentiates Kaminario from the rest. Yes, Kaminario has the performance numbers, but Kaminario also has a highly available (are we talking 6 nines?) architecture inherent within SPEAR. Of course, I have not done enough to compare Kaminario with the rest yet, but right now, availability isn't something that most all-Flash startups trumpet loudly. I could be wrong, but the message will become clearer when I go through my list of all-Flash players – SolidFire, Pure Storage, Virident, Violin Memory and Texas Memory Systems.

Each of the blades can be either an ioDirector or a DataNode, and they are interconnected internally with 1 and 10 Gigabit ports, with at least one blade acting as a standby to the rest in a logical group of production blades. The 10 Gigabit connections are used for “data passing” between the blades, for load balancing as well as for spreading out the availability function for the data. The Gigabit connection is used for management.

In addition to that, there is also a Fibre Channel piece fronting the K2 to the hosts in the SAN. Yes, this is an FC-SAN storage solution, and since there was no mention of iSCSI, the IP-SAN capability is likely not there (yet).

Here's a look at the Kaminario SPEAR architecture:

The 2 key components are the ioDirector and the DataNode. A blade can either have a dedicated personality (either ioDirector or DataNode) or it can carry both personalities in one blade. The minimum configuration is 2 blades with 2 ioDirectors, for redundancy reasons.

The ioDirector is the front-facing piece. It presents the K2 block-based LUNs to the SAN and has the intelligence to dynamically load balance both Reads and Writes, and also to optimize its resource utilization. The DataNode plays the role of fetching, storing and backing up, and is pretty much the back-end worker.
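
Here is a toy model of those two personalities. The class names, and the assumption that the minimum two blades carry both roles, are mine, purely to make the division of labour concrete:

```python
# Toy model of blade personalities in a SPEAR-style grid.
# My own schematic, not Kaminario's implementation.
from dataclasses import dataclass, field

@dataclass
class Blade:
    name: str
    roles: set = field(default_factory=set)   # "ioDirector" and/or "DataNode"

def minimum_config() -> list[Blade]:
    # The post notes a minimum of 2 blades with 2 ioDirectors for redundancy;
    # whether they also act as DataNodes here is my assumption.
    return [Blade("blade-1", {"ioDirector", "DataNode"}),
            Blade("blade-2", {"ioDirector", "DataNode"})]

cluster = minimum_config()
assert sum("ioDirector" in b.roles for b in cluster) >= 2   # redundant front end
print([(b.name, sorted(b.roles)) for b in cluster])
```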

With this description, there are 2 layers in the SPEAR architecture. And interestingly, while I mentioned that Kaminario is an all-Flash storage player, it actually has HDDs as well. The HDDs do not participate in primary data serving; they serve as containers for backup of the primary data in the SSDs, which can be MLC Flash or DRAM. The back-end backup layer comprising HDDs is what I mentioned earlier about availability. Kaminario is adding data availability as one of its differentiating features.

That's the hardware layout of SPEAR, but the more important piece is its software, the SPEAR OS. It has 3 patent-pending capabilities, with not-so-cool names (which are trademarked):

  1. Automated Data Distribution
  2. Intelligent Parallel I/O Processing
  3. Self Healing Data Availability

The Automated Data Distribution of the SPEAR OS acts as a balancer. It spreads the data dynamically and randomly (in a random-equilibrium fashion, I think) over the storage capacity for efficiency, SSD longevity and, of course, optimized performance balancing.

The second capability is Intelligent Parallel I/O Processing. The K2 architecture is essentially a storage grid. The internal 10 Gigabit interconnects basically tie all the nodes (ioDirectors and DataNodes) together in a grid-like fashion for the best possible intra-node communication. The parallelization of Read and Write I/O requests spreads them across the nodes in the storage grid, giving the best average response and service times.
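
A small sketch of that parallelization idea: fan the chunks of a request out to several DataNodes at once, so the service time tracks the slowest chunk rather than the sum of all of them. Node names and latencies are made up; this is a generic illustration, not SPEAR's code:

```python
# Generic illustration of fanning a request out to several back-end nodes
# in parallel, so overall latency approaches that of the slowest chunk.
from concurrent.futures import ThreadPoolExecutor
import time, random

def read_chunk(node: str, chunk_id: int) -> bytes:
    time.sleep(random.uniform(0.001, 0.005))   # simulated back-end latency
    return f"{node}:{chunk_id}".encode()

def parallel_read(nodes: list[str], chunks: int) -> list[bytes]:
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = [pool.submit(read_chunk, nodes[i % len(nodes)], i)
                   for i in range(chunks)]
        return [f.result() for f in futures]

print(parallel_read(["dn-1", "dn-2", "dn-3", "dn-4"], chunks=8))
```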

Last but not least is the Self Healing Data Availability, a capability to dynamically reconfigure access to the data in the event of node failure(s). Kaminario claims no single point of failure, which is something I would be very interested to verify if given a chance to assess the storage a bit deeper. So far, that is all the information I have been able to get.

The Kaminario K2 product line comes in 3 models – D, F and H.

D is for DRAM only and F is for Flash MLC only. The H model is a combination of both Flash and DRAM SSDs. Here is how Kaminario positions each of the 3 models:

 

Kaminario is one of the early all-Flash storage systems to gain recognition in 2011. They were named a finalist in both the Storage Magazine and SearchStorage Storage Product of the Year competitions for 2011. This not only endorses a brand new market for solid state storage systems but also validates an entirely new category in the storage networking arena.

Kaminario is one to watch in 2012, along with the others I plan to review in the coming weeks. The battle of the Flash racks is coming!

BTW, Dell is a reseller of Kaminario.