Really? Disk is Dead? From Violin?

A catchy email from one of the forums I subscribe to caught my attention. It went something like “…Grateful … Disk is Dead“. Here's the blog from Kevin Doherty, a Senior Account Manager at Violin Memory.

Coming from Violin Memory, this is pretty obvious because they have an agenda against HDDs. They don’t use any disks at all …. in any form factor. They use VIMMs (Violin Inline Memory Modules), something no other vendor in the industry uses today.


I recalled my blog in 2012, titled “Violin pulling the strings“. Violin came up here in South Asia with much fanfare, lots of razzmatazz, and there was plenty of excitement. I was even invited to their product training at Ingram Micro in Singapore and met their early SE, Mike Thompson. Mike is still there, I believe, but the EMC veteran in Singapore whom I mentioned in my previous blog left almost a year after joining. So did the ex-Sun General Manager of Violin Memory in Singapore.


No Flash in the pan

The storage networking market is now teeming with flash solutions. Consumers are probably sick to their stomachs trying to work out which flash solution they should be considering. There is so much hype, fuzz and buzz and, like a swarm of bees in the chaos of the moment, there is actually a calm and discerning pattern slowly but surely emerging. Storage networking folks would probably know this well, but for the benefit of other readers, how we view flash (and other solid state storage) becomes clear with the picture below:

[Figure: Flash performance gap]

(picture courtesy of  http://electronicdesign.com/memory/evolution-solid-state-storage-enterprise-servers)

Right at the top, we have the CPU/Memory complex (labelled as Processor). Our applications, or at least bits and pieces of them, run in this CPU/Memory complex.

Therefore, we can see Pattern #1 showing up.

The big boys better be flash friendly

An interesting article came up in the news this week. The article, from the ever-popular The Register, mentioned 3 up-and-coming storage stars, Nimble Storage, Tintri and Tegile, and their assault on a flash strategy “blind spot” of the big boys, notably EMC and NetApp.

I have known about Nimble Storage and Tintri for a couple of years now, and I did take some time to read up on their storage technology offerings. Tegile was new to me; it appeared on my radar after SearchStorage.com announced it as the Gold Winner of the enterprise storage category for 2012.

The Register article intrigued me because it implied that traditional storage vendors such as EMC and NetApp are probably applying a “band-aid” when putting together their flash storage strategies. Typically, I see these strategic concepts introduced by these 2 vendors:

  1. Have a server-side cache strategy by putting a PCIe card on the hosting server
  2. Have a network-based all-flash caching area
  3. Have a PCIe-based flash card on the storage system
  4. Have solid state drives (SSDs) in its disk shelves enclosures

In (1), EMC has VFCache (the server-side caching software has been renamed XtremSW Cache and is being repackaged under the Xtrem brand name) and NetApp has its FlashAccel solution. Previously, as I was informed, FlashAccel was using the FusionIO ioTurbine solution, but just days ago NetApp extended FlashAccel to support the LSI Nytro WarpDrive as well. The main objective of a server-side caching strategy using flash is to accelerate mostly read-based I/O operations for specific application workloads at the server side.
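
To make that concrete, here is a minimal, purely illustrative sketch (in Python, my own pseudo-model, not any vendor’s code) of the read-through idea: reads are served from the flash tier on a hit and filled from the backing array on a miss, while writes go straight through to the array. That write-through behavior is exactly why this strategy accelerates reads far more than writes.

class ReadThroughCache:
    def __init__(self, flash, array):
        self.flash = flash    # dict-like fast tier, e.g. a PCIe flash card
        self.array = array    # dict-like slow tier, the backing storage array

    def read(self, lba):
        if lba in self.flash:             # cache hit: served at flash latency
            return self.flash[lba]
        data = self.array[lba]            # cache miss: fetched at array latency
        self.flash[lba] = data            # warm the cache for the next read
        return data

    def write(self, lba, data):
        self.array[lba] = data            # write-through: the array stays authoritative
        self.flash.pop(lba, None)         # invalidate any stale cached copy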


Expensive hard disk is good

No, I don’t mean to be bad, but spinning HDD prices will remain high even though post-Thailand-flood production has returned to normal.

According to IHS iSuppli, a market research intelligence firm, prices will continue to hold steady and will not fall to pre-flood levels until 2014. The reason is simple. Hard disk drive prices are pretty much dictated by the only 2 real remaining hard disk companies in the world – Seagate and Western Digital. These guys control more than 85% of the hard disk market, and as demand for HDDs outstrips supply, the current hard disk prices are hitting the bottom line hard for just about everyone.

But the bad news is turning into good news for solid state storage devices. NAND-Flash-based devices are driving a new clan of storage start-ups, the likes of Violin Memory, Kaminario, Pure Storage and Virident. The EMC acquisition of XtremIO was a strong endorsement that cements flash as the cornerstone of enterprise storage arrays to come. Even The Register predicted that the EMC VMAX will be the last primary storage array before the flash tsunami.

The NAND-Flash solid state cells – multi-level cells (MLCs), single-level cells (SLCs) and even triple-level cells (TLCs) – are going through birth, puberty and adolescence extremely fast, because the demand for faster and faster IOPS, throughput and lower latency is hitting at full speed. And it is likely that all the xLCs (SLCs, MLCs and TLCs) could go through their life cycle in an extremely short span, because there is a new class of solid state that is pushing the price-performance envelope closer and closer to the speed of DRAM but at the price of Flash. This new type of solid state is Storage Class Memory (SCM).

Violin pulling the strings

Violin Memory is on our shores as we speak. There is already confirmed news that an EMC veteran in Singapore has joined them and will be surfacing soon in the South Asia region.

Of all the all-Flash storage systems on my platter, Violin Memory seems to be the only one ready for an IPO this year, after having taken in USD$75 million worth of funding in 2011. That was an impressive number considering the economic climate last year was not so great. But what is so great about Violin Memory that is attracting the big money? Both Juniper Networks and Toshiba America are early investors.

I am continuing my quest to look at all-Flash storage systems, after my blogs on Pure Storage, Kaminario and SolidFire. (Actually, I wanted to write about another all-Flash vendor first because it keeps bugging me with its emails … but I am feeling annoyed with that one right now.) Violin Memory is the here and now.

From a technology standpoint, there are a few key technologies, notably their vRAID and their Violin Switched Memory architecture (vXM), both patent pending. Let’s explore these 2 technologies.

At the core of Violin Memory is the vXM, a proprietary, patent-pending memory switching fabric, which Violin claims to be the first in the industry. The architecture uses high-speed, fault-tolerant memory controllers and FPGAs (field programmable gate arrays) to switch between corresponding, fully redundant elements of VIMMs (Violin Inline Memory Modules). The high-level vXM architecture is shown below:

[Figure: high-level vXM architecture]

VIMMs are the building blocks: aggregations of memory modules, which can be of different memory types. The example below shows the aggregation of Toshiba MLC chips, which eventually make up the VIMMs, and their further consolidation into the full-capacity Flash array.

The memory switching fabric of the vXM architecture enables very high speed in data switching and routing, and hence Violin can boast of having “spike-free latency“, something we in this industry desperately need.

Another cool technology that Violin has is their hardware-based vRAID. This is a RAID algorithm that is designed to work with Flash and other solid state storage devices. I am going through the Violin Memory white paper now and the technology is some crazy, complicated sh*t. Their website presents the low-latency vRAID like this:

[Figure: Violin vRAID low-latency architecture]

I don’t want to sound stupid writing about the vRAID now, and I probably need to digest the whitepaper several times in order to understand the technology better. And I will let you know once I have a fair idea of how this works.

More about Violin Memory later. Meanwhile, a little snag came up when a small Texas company, Narada Systems, filed a patent infringement suit against Violin on January 5, 2012. The suit claims that the vXM violates the technology and intellectual property of patents #6,504,786 and #7,236,488, and seeks damages from Violin Memory. You can read about the legal suit here.

Whether this legal suit will affect Violin Memory is anybody’s guess, but the prospect of Violin Memory going for an IPO in just a few short years validates how the industry views the solid state storage solutions out there.

I have already mentioned a handful of solid state storage players whom I call “all-Flash”, and over at the Network Computing site, blogger Howard Marks revealed 2 more stealth-mode solid state start-ups, XtremIO and Proximal Data. This validates the industry’s confidence in solid state storage, and in 2012 we are going to see a gold rush in this technology.

The storage industry is dying for a revamp on the performance side; living with the bane of poor spinning-disk performance for years has made the market hungry for IOPS, low latency and throughput. Solid state storage is ripe, and I hope this will trigger newer architectures in storage, especially RAID. Well done, Violin Memory!


Kaminario who?

The name “Kaminario” intrigues me and I don’t know the meaning of it. But it rolls off the tongue nicely, until you say it a few times fast and your tongue gets twisted in a jiffy.

Kaminario is one of the few prominent startups in the all-flash storage space, getting USD$15 million in Series C funding from the big-gun VCs Sequoia and Globespan Capital Partners in 2011. That brought their total funding to USD$34 million, and also brought them the attention of the storage market.

I am beginning my research into their technology and their product line, the K2, to see why they are special. I am looking for the angle that differentiates them, how they position themselves in the market, and why they deserved Series C funding.

Kaminario was founded in 2008, with headquarters in Boston, Massachusetts. They have a strong R&D facility in Israel and, looking at their management lineup, they are headed by several personalities with an Israeli background.

All this shouldn’t be a problem to many, except for the fact that Malaysia doesn’t recognize Israel diplomatically, and some companies here, especially the government, might have an issue with the Israeli link. But then again, we have a lot of hypocrites in Malaysian politics and I am not going there in my blog. It’s a waste of my time.

The key technology is Kaminario’s K2 SPEAR architecture, which defines a fundamental method to store and retrieve performance-sensitive data. Yes, since this is an all-Flash storage solution, performance numbers, speeds and feeds are the “weapons” used to influence prospects with high performance requirements. Kaminario touts that their storage solution scales up to 1.5 million IOPS and 16GB/sec throughput, and those are indeed fantastic numbers when you compare them with conventional HDD-based storage platforms. But nowadays, if you are in the all-Flash game, everyone else is touting similar performance numbers as well. So it is no biggie.

The secret sauce of the Kaminario technology is, of course, its architecture – SPEAR, which stands for Scale-out Performance Storage Architecture. While Kaminario states that their hardware is pretty much off-the-shelf, open industry standard, under the covers the SPEAR architecture could have incorporated some special, proprietary design in its hardware to maximize the SPEAR technology. Hence, I believe there is a reason why Kaminario chose a blade-based system for the enclosures in its rack. Here’s a look at their hardware offering:

Using blades is a good idea because blades offer integrated wiring, consolidation, simple plug-and-play, ease of support, N+1 availability and so on. But this can also put Kaminario in an all-blades-or-nothing position, something some customers in Malaysia might have to get used to because many would prefer their own racks. I could be wrong, and let’s hope I am.

Each enclosure houses 16 blades, with N+1 availability. As I go through Kaminario’s architecture, the word availability is becoming louder, and this could be something differentiating Kaminario from the rest. Yes, Kaminario has the performance numbers, but Kaminario also has a highly available (are we talking 6 nines?) architecture inherent within SPEAR. Of course, I have not done enough to compare Kaminario with the rest yet, but right now availability isn’t something that most all-Flash startups trumpet loudly. I could be wrong, but the message will become clearer when I go through my list of all-Flash players – SolidFire, Pure Storage, Virident, Violin Memory and Texas Memory Systems.

Each of the blades can be either an ioDirector or a DataNode, and they are interconnected internally with 1/10 Gigabit ports, with at least one blade acting as a standby to the rest in a logical group of production blades. The 10 Gigabit connections are used for “data passing” between the blades, both for load balancing and for spreading out the data availability function. The Gigabit connection is used for management.

In addition to that, there is also a Fibre Channel piece fronting the K2 to the hosts in the SAN. Yes, this is an FC-SAN storage solution, and since there was no mention of iSCSI, the IP-SAN capability is likely not there (yet).

Here’s a look at the Kaminario SPEAR architecture:

The 2 key components are the ioDirector and the DataNode. A blade can either have a dedicated personality (either ioDirector or DataNode) or share both personalities in one blade. The minimum configuration is 2 blades with 2 ioDirectors, for redundancy reasons.

The ioDirector is the front-facing piece. It presents the K2 block-based LUNs to the SAN and has the intelligence to dynamically load balance both reads and writes while optimizing resource utilization. The DataNode plays the role of fetching, storing and backing up, and is pretty much the back-end worker.

With this description, there are 2 layers in the SPEAR architecture. And interestingly, while I mentioned that Kaminario is an all-Flash storage player, it actually has HDDs as well. The HDDs do not participate in primary data serving; they serve as backup containers for the primary data in the SSDs, which can be MLC-Flash or DRAM. This back-end backup layer comprising HDDs is what I meant earlier about availability: Kaminario is adding data availability as one of its differentiating features.

That’s the hardware layout of SPEAR, but the more important piece is its software, the SPEAR OS. It has 3 patent-pending capabilities, with not-so-cool names (which are trademarked):

  1. Automated Data Distribution
  2. Intelligent Parallel I/O Processing
  3. Self Healing Data Availability

The Automated Data Distribution of the SPEAR OS acts as a balancer. It dynamically and randomly (in a random-equilibrium fashion, I think) spreads the data over the storage capacity for efficiency, SSD longevity and, of course, optimized performance.
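
Kaminario doesn’t publish the algorithm, but as a rough illustration of the idea (entirely my own assumption, in Python), hashing each block address produces exactly this kind of even, pseudo-random spread across nodes, with no hot spots and no central placement table:

import hashlib

NODES = 8

def place(lba):
    # hash the block address; the first digest byte picks the owning node
    return hashlib.md5(str(lba).encode()).digest()[0] % NODES

counts = [0] * NODES
for lba in range(100_000):
    counts[place(lba)] += 1
print(counts)   # roughly 12,500 blocks per node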

The second capability is Intelligent Parallel I/O Processing. The K2 architecture is essentially a storage grid. The internal 10 Gigabit interconnects tie all the nodes (ioDirectors and DataNodes) together in a grid-like fashion for the best possible intra-node communication. Parallelizing the read and write I/O requests across the nodes in the storage grid gives the best average response and service times.
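
Again purely as my own toy illustration of the principle, not Kaminario’s code: splitting one large read into chunks fetched concurrently from several nodes brings the service time closer to that of the slowest chunk rather than the sum of all chunks.

from concurrent.futures import ThreadPoolExecutor

DATA = b"0123456789abcdef" * 64          # stand-in for data striped across nodes

class FakeNode:                           # placeholder for a DataNode
    def read(self, offset, length):
        return DATA[offset:offset + length]

def parallel_read(nodes, offset, length):
    chunk = length // len(nodes)
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = [pool.submit(n.read, offset + i * chunk, chunk)
                   for i, n in enumerate(nodes)]
        return b"".join(f.result() for f in futures)   # reassemble in order

nodes = [FakeNode() for _ in range(4)]
assert parallel_read(nodes, 0, 1024) == DATA[:1024]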

Last but not least is Self Healing Data Availability, a capability to dynamically reconfigure access to the data in the event of node failure(s). Kaminario claims no single point of failure, which is something I am very interested to verify if given a chance to assess the storage a bit deeper. So far, that’s all the information I have been able to get.
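
Here is a toy model of the self-healing idea (my own guesswork, since Kaminario’s actual mechanism is not public): every block is written to two nodes, and when a node fails, its blocks are re-replicated from the surviving copies to restore redundancy.

import random

class Cluster:
    def __init__(self, n):
        self.nodes = {i: {} for i in range(n)}

    def write(self, lba, data):
        a, b = random.sample(sorted(self.nodes), 2)   # two replicas, two nodes
        self.nodes[a][lba] = data
        self.nodes[b][lba] = data

    def fail(self, dead):
        lost = self.nodes.pop(dead)                   # node goes away
        for lba, data in lost.items():
            candidates = [n for n in self.nodes if lba not in self.nodes[n]]
            self.nodes[random.choice(candidates)][lba] = data   # restore 2nd copy

    def read(self, lba):
        return next(s[lba] for s in self.nodes.values() if lba in s)

c = Cluster(4)
c.write(7, b"payload")
c.fail(0)                 # lose a node; redundancy is rebuilt behind the scenes
print(c.read(7))          # b'payload' -- data stays accessible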

The Kaminario K2 product line comes in 3 models – D, F and H.

D is DRAM-only and F is Flash-MLC-only, while the H model is a combination of both Flash and DRAM SSDs. Here’s how Kaminario positions each of the 3 models:

[Figure: positioning of the Kaminario K2 D, F and H models]

Kaminario is one of the early all-Flash storage vendors that gained recognition in 2011. They were named a finalist in both the Storage Magazine and SearchStorage Storage Product of the Year competitions for 2011. This not only endorses a brand new market for solid state storage systems but also validates an entirely new category in the storage networking arena.

Kaminario is one to watch in 2012, along with the others I plan to review in the coming weeks. The battle of the Flash racks is coming!

BTW, Dell is a reseller of Kaminario.

Battle of flash racks coming soon

The battle is probably already here. It has just begun for rack-mounted flash-based or DRAM-based (or both) storage systems.

We have read in the news about the launch of EMC’s Project Lightning, and I wrote about it. EMC is already stirring up the competition, aiming its guns at FusionIO. Here’s a slide from EMC comparing their VFCache with FusionIO.

Not to be outdone, NetApp set out to douse the razzmatazz of EMC’s Lightning, announcing the future availability of their server-side flash software (no PCIe card), which will work with the major host-based/server-side PCIe Flash cards. (FusionIO, heads up.) Ah, in Sun Tzu’s Art of War, this is called helping your buddy fight the bigger enemy.

NetApp threw some FUD into the battle zone, claiming that EMC VFCache only supports 300GB while the NetApp flash software will support 2TB, NetApp multiprotocol, and VMware’s vMotion, DRS and HA (things VFCache does not support right now).

The battle of PCIe has begun.

The next battle will be for rack-mounted flash storage systems or appliances. EMC is following up with Project Thunder (because thunder comes after lightning), which is a flash-based storage system or appliance. Here’s a look at EMC’s preliminary information on Project Thunder.

And here’s how EMC is positioning different storage tiers in the following diagram below (courtesy of VirtualGeek), being glued together by EMC FAST (Fully Automated Storage Tiering) technology.

But EMC is not alone, as there are already several prominent start-ups out there, already offering flash-based, rackmount storage systems.

In the battle ring, there is Kaminario K2 with its SPEAR (Scale-out Performance Storage Architecture), Violin Memory with its Violin Switched Memory (vXM) architecture, Pure Storage’s Purity Operating Environment and SolidFire’s Element OS, just to name a few. Of course, we should never discount the granddaddy of all flash-based storage – the Texas Memory Systems RamSan.

The whole cycle of competition in this new arena is starting all over again, and it’s exciting for me. There is so much to learn about newer, more innovative architectures, and I intend to cover more of these players in the coming blog entries. It is time to take notice, because SSDs are dropping in price, FAST! And in 2012, I strongly believe this will be the next battle for the storage players, both established and start-ups.

Let the battle begin!


All-SSD storage arrays? There’s more than meets the eye at Pure Storage

Wow, after an entire week off with the holidays, I am back and excited about the many happenings in the storage world.

One of the more prominent pieces of news was the announcement of Pure Storage launching its enterprise storage array built entirely with flash-based solid state drives. In addition, other start-ups were also offering SSD storage arrays. The likes of Nimbus Data, Avere and Violin Memory Systems all made the news, as did the granddaddy of solid state storage arrays, Texas Memory Systems.

The first thing that came to my mind was, “Wow, this is great because this will push down the $/GB of SSDs closer to the range of $/GB for spinning disks”. But then skepticism crept in and I thought, “Do we really need an entire enterprise storage array of SSDs? That’s going to cost the world”.

At the same time, we in the storage industry know that no two pieces of data are alike. They can be large, small, random, sequential, accessed frequently or infrequently, and so on. It is obviously better to tier the storage, using SSDs for Tier 0, 10K/15K RPM spinning HDDs for Tier 1, SATA for Tier 2 and perhaps tape for the archive tier. I was already tempted to write up my pessimism on Pure Storage when something interesting caught my attention.

Besides the usual marketing jive of sub-millisecond and predictable latency, green messaging, global inline deduplication and compression, and built-in data integrity in its Purity Operating Environment (POE), I was very surprised to find the team behind Pure Storage. Here’s their line-up:

  • Scott Dietzen, CEO – starting from principal technologist of Transarc (sold to IBM), principal architect of WebLogic (sold to BEA Systems), CTO of BEA (sold to Oracle), and CTO of Zimbra (sold to Yahoo! and then to VMware)
  • John “Coz” Colgrove, Founder & CTO – Veritas Fellow, CTO of Symantec Data Management group, principal architect of Veritas Volume Manager (VxVM) and Veritas File System (VxFS) and holder of 70 patents
  • John Hayes, Founder & Chief Architect – formerly of  Yahoo! office of Chief Technologist
  • Bob Wood, VP of Engineering – Formerly NetApp’s VP of File System Engineering,
  • Michael Cornwell, Director of Technology & Strategy – formerly the lead technologist of Sun Microsystems’ Sun Storage F5100 Flash Array and also Quantum’s storage architect for their storage telemetry, VTL and DXi solutions
  • Ko Yamamoto, VP of System Engineering – previously NetApp’s director of platform engineering, Quantum DXi director of hardware engineering, and also key contributor to 4-generations of Tandem NonStop technology

In addition to that, there are 3 key individual investors worth mentioning

  • Diane Greene – co-founder and former CEO of VMware
  • Dr. Mendel Rosenblum – co-founder and former Chief Scientist of VMware
  • Frank Slootman – formerly CEO of Data Domain (acquired by EMC)

All these industry big guns are flocking to Pure Storage for a reason, and it looks to me that Pure Storage ain’t your ordinary, run-of-the-mill enterprise storage company. There’s definitely more than meets the eye.

On top of the enterprise storage array platform is Pure Storage’s Purity Operating Environment (POE). POE focuses on 3 key storage services:

  • High Performance Data Reduction
  • Mission Critical Reliability
  • Predictable Sub-millisecond Performance

After going through the deep-dive videos by Pure Storage’s CTO, John Colgrove, it is clear they are very much banking the success of their solution on SSDs. Everything they have done is based on SSDs. For example, in order to achieve a larger capacity as well as a much cheaper $/GB, they use data reduction techniques – global deduplication, high compression and fine-grained thin provisioning at 512-byte granularity. By trading off IOPS (which SSDs have in plenty, since they are several times faster than conventional spinning disks), a larger usable capacity is achieved.
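
For readers unfamiliar with inline deduplication, here is a minimal sketch (my own, in Python, with the mechanics much simplified relative to a real array): incoming data is carved into fixed-size blocks, each block is fingerprinted, and duplicate blocks are stored only once.

import hashlib

BLOCK = 512
store = {}      # fingerprint -> physical block, stored once
volume = []     # logical volume: an ordered list of fingerprints

def write(data):
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # a duplicate block costs no extra space
        volume.append(fp)

def read():
    return b"".join(store[fp] for fp in volume)

write(b"x" * 2048)    # four identical 512-byte blocks written ...
print(len(store))     # ... but only 1 physical block stored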

In their RAID 3D, they have also incorporated several high-reliability techniques and data integrity algorithms designed specifically for SSDs. One point mentioned was that traditional RAID, and especially the parity-based RAID levels, was originally designed to protect against an entire device failing. In SSDs, however, the failure does not necessarily occur across the entire device. Because of the way SSDs are built, the failure hotspots tend to happen at the much more granular bit level. The erase-then-write technique inherent in NAND Flash SSDs causes the bit error rate (BER) of the device to go up as it ages. Therefore, it is more likely to get a read/write error from within the SSD’s memory than to have the entire SSD device fail. Pure Storage RAID 3D is meant to address such occurrences of bit errors.
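
I don’t know the internals of RAID 3D, but a toy example (my own, not Pure Storage’s algorithm) shows why per-block integrity checks plus parity suit flash so well: a silent bit error is caught by a checksum on read, and the affected block is rebuilt from parity without declaring the whole device dead.

import zlib
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

stripe = [b"AAAA", b"BBBB", b"CCCC"]        # data blocks in one stripe
parity = xor_blocks(stripe)                  # XOR parity block
checks = [zlib.crc32(b) for b in stripe]     # per-block checksums

stripe[1] = b"BBBA"                          # simulate a silent bit error

def read_block(i):
    if zlib.crc32(stripe[i]) != checks[i]:   # checksum catches the corruption
        peers = [b for j, b in enumerate(stripe) if j != i]
        return xor_blocks(peers + [parity])  # rebuild the block from parity
    return stripe[i]

assert read_block(1) == b"BBBB"              # corrupted block recovered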

I spoke a bit about storage tiering earlier in this article because every corporation employs storage tiering to be financially responsible. However, John Colgrove’s argument was: why tier the storage at all when there are IOPS aplenty and the $/GB is comparable to spinning disks? That holds true when the $/GB of SSDs can match the $/GB of spinning disks. Factors we must also take into account are the rack-space savings from the smaller-profile SSDs and the power-saving costs of SSDs versus conventional HDD-based enterprise storage arrays. Taken in their entirety, there are strong indications that the $/GB of SSD-based systems can match, or perhaps even undercut, the $/GB of HDD-based systems. And since the IOPS levels demanded by present-day applications are not super-high and multi-core processing is cheap, there is plenty of headroom for Pure Storage and other similar enterprise storage array companies to grow.
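
A quick back-of-envelope illustration (with purely assumed numbers, not quotes from anyone) of how data reduction pulls the effective $/GB of flash into spinning-disk territory:

flash_raw = 10.0    # assumed raw flash $/GB
hdd_raw   = 2.0     # assumed enterprise HDD $/GB
reduction = 5.0     # assumed dedupe + compression ratio

effective = flash_raw / reduction
print(effective, effective <= hdd_raw)   # 2.0 True -- parity with HDD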

The tides are changing for the storage industry, and it is good to see a start-up like Pure Storage boldly coming forth to announce its backing for SSDs. It is good for the consumer and good for the industry. More importantly, they are driving innovation and making us rethink how we build storage arrays. I am looking forward to more things to come.

Solid State Drives … are they reliable?

There have been a lot of questions about Solid State Drives (SSDs), aka Enterprise Flash Drives (EFDs) as some vendors call them. Are they less reliable than our 10K or 15K RPM hard disk drives (HDDs)? I was asked this question on stage while presenting the topic of Green Storage 3 weeks ago.

Well, the usual answer from the typical techie is … “It depends”.

We all fear the unknown, and given the limited knowledge we have about SSDs (they are fairly new in the enterprise storage market), we tend to be drawn more to the negatives than the positives of what SSDs are and what they can be. I, for one, believe that SSDs have more positives and that, over time, we will grow to accept that this is all part of the IT evolution. IT has always evolved into something better, stronger, faster, more reliable and so on. As Jeff Goldblum’s character Dr. Ian Malcolm famously said in the movie Jurassic Park, “Life finds a way”; IT will always find a way to do just that.

SSDs are typically categorized into MLCs (multi-level cells) and SLCs (single-level cells). They typically have predictable life expectancies, ranging from tens of thousands of writes to more than a million writes per drive. This, by no means, is a measure of the reliability of SSDs versus HDDs. However, SSD controllers and drives employ various techniques to enhance their durability. A common method is to balance the I/O accesses across the disk blocks, adapting to the I/O usage patterns; this prolongs the lifespan of the blocks (and subsequently the drive itself) and also ensures that the drive’s performance does not lag, since the I/O is more “spread out” across the drive. This is known as a “wear-leveling” algorithm.
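
A toy wear-leveling allocator (my own sketch, far simpler than a real SSD controller) makes the idea obvious: always write to the erase block with the lowest erase count, so wear spreads evenly and no single block burns out early.

import heapq

class WearLeveler:
    def __init__(self, num_blocks):
        # min-heap of (erase_count, block_id)
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def allocate(self):
        erases, block = heapq.heappop(self.heap)        # least-worn block wins
        heapq.heappush(self.heap, (erases + 1, block))  # erased before reuse
        return block

wl = WearLeveler(4)
print([wl.allocate() for _ in range(8)])   # [0, 1, 2, 3, 0, 1, 2, 3]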

Most SSDs proposed by enterprise storage vendors are MLCs, to meet the market’s price-per-IOPS and $/GB demands, because SLCs are definitely more expensive for their higher durability. MLCs also have a higher BER (bit error rate); it is known that MLCs have 1 bit error per 10,000 writes while SLCs have 1 bit error per 100,000 writes.

But the advantages of SSDs clearly outweigh those of HDDs. Fast access (much lower latency) is one of the main advantages. Higher IOPS is another. SSDs can provide from several thousand IOPS to more than 1 million IOPS, compared to enterprise HDDs: a typical 7,200 RPM SATA drive delivers fewer than 120 IOPS, while a 15,000 RPM Fibre Channel or SAS drive ranges from 130-200 IOPS. That IOPS advantage is definitely a vast differentiator between SSDs and HDDs.
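
Some quick arithmetic drives the point home (the HDD figures are from above; the SSD figure is an assumed mid-range number, purely for illustration):

sata_7200 = 120        # IOPS, 7,200 RPM SATA drive
fc_15000  = 200        # IOPS, 15,000 RPM FC/SAS drive (upper range)
ssd       = 100_000    # IOPS, an assumed enterprise SSD

print(ssd // fc_15000)   # 500 fifteen-K spindles to match one SSD
print(ssd // sata_7200)  # 833 SATA spindles to match one SSD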

We are also seeing both drive-format and card-format SSDs in the market. The drive-format types typically come in 2.5″ and 3.5″ profiles, and they tend to fit into enterprise storage systems as “disk drives”. They are known for providing capacity. On the other hand, there are also card-format SSDs that fit into a PCIe slot in the host system. These tend to address the performance requirements of systems and applications. The well-known PCIe vendors are Fusion-io, which plays in the high-end performance market, and NetApp, which peddles the PAM (Performance Acceleration Module) card in its filers. The PAM card has since been renamed FlashCache. Rumour has it that EMC will be coming out with a similar solution soon.

Another thing to note is that SSDs can be read-biased or write-biased. Most SSDs in the market tend to be read-biased, published with high read IOPS rather than write IOPS. Therefore, we have to be prudent about what is out there. It means that some solutions, such as the NetApp FlashCache, are more suitable for read-heavy I/O than write-heavy I/O. FlashCache addresses a large segment of the enterprise market because most applications are heavier on reads than on writes.

SSDs have been positioned as the Tier 0 layer in the Automated Storage Tiering segment of enterprise storage. Vendors such as Dell Compellent, HP 3PAR and EMC FAST2 position themselves with enhanced tiering techniques to automate LUN and sub-LUN tiering, and customers have been lapping up this feature like little puppies.

However, an up-and-coming usage segment for SSDs is positioning them as an extended read or write cache for the system’s existing memory. NetApp’s FlashCache is a PCIe solution that is basically an extended read cache. An interesting feature of Oracle Solaris ZFS, called the Hybrid Storage Pool, allows the creation of read and write caches using SSDs. The Sun fellas even came up with cool names – ReadZilla and LogZilla – for these Hybrid Storage Pool features.
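
As a rough illustration of how this looks in practice on ZFS (the pool and device names here are made up):

zpool add tank cache c2t0d0    # SSD as an extended read cache (L2ARC)
zpool add tank log c2t1d0      # SSD as a dedicated write log (ZIL/slog)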

Basically, I have poured out what I know about SSDs (so far) and I intend to learn more. SNIA (Storage Networking Industry Association) has a Technical Working Group for Solid State Storage. I advise readers to check it out.