Using simple MTBF to explain reliability to Finance

The other day, a prospect was requesting quotation after quotation from a friend of mine to make a so-called “apples-to-apples” comparison with another storage vendor. But it was difficult to make that sort of comparison because one vendor would propose SAS, the other SATA, and so on. I was roped in by my friend to help. So in the end I asked this prospect which of these 3 criteria mattered to him most – Performance, Capacity or Reliability.

He gave me an answer, and reliability was the leading criterion in his requirement. Then he asked me if I could help determine, in a “quick-and-dirty” manner, the question of reliability by using the MTBF (Mean Time Between Failures) of the disks to convince his finance team.

Well, most HDD vendors publish their MTBF as a measuring stick for the reliability of their manufactured disks. MTBF is by no means accurate, but it is useful for defining HDD reliability in a crude manner. If you have seen the components that go into a HDD, you would be amazed at the tremendously stressful environment they operate in. The read/write head flies at a height (head gap) above the platters thinner than a human hair, and servo-controlled technology maintains a constant, never-lagging 7,200/10,000/15,000 RPM day after day, month after month, year after year. And yet, we seem to take the HDD for granted, rarely thinking how much technology goes into it on a nanoscale. That’s technology at its best – taking something so complex and making it so simple for all of us.

I found that the Seagate Constellation.2 Enterprise-class 3TB 7200 RPM disk has an MTBF of 1.2 million hours, while the Seagate Cheetah 600GB 10,000 RPM disk has an MTBF of 1.5 million hours. So, the Cheetah is about 25% more reliable than the Constellation.2, right?

Wrong! There are other factors involved. To achieve 3TB usable, a RAID 1 set (average write performance, very good read performance) would require 2 units of 3TB 7200 RPM disks. On the other hand, using 10,000 RPM disks, with the largest shipping capacity of 600GB, you would need 10 units of such HDDs. RAID-DP (this is NetApp, by the way) would give average write performance (better than RAID 1 in some cases) and very good read performance (for sequential access).

So, I broke down the 2 examples above for this prospect (to achieve 3TB usable):

  1. Seagate Constellation.2 3TB 7200 RPM HDD MTBF is 1.2 million hours x 2 units
  2. Seagate Cheetah 600GB 10,000 RPM HDD MTBF is 1.5 million hours x 10 units

By using a simple calculation of

    RF (Reliability Factor) = MTBF/#HDDs

the prospect will be able to determine which of the 2 HDD types above could be more reliable.

In case #1, the RF is 600,000 hours and in case #2, the RF is 150,000 hours. Suddenly you can see that the Constellation.2 HDDs, which have a lower MTBF, have a much higher RF compared to the Cheetah HDDs. Quick and simple, isn’t it?
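For anyone who wants to hand this to the finance team as a spreadsheet-free check, here is a minimal sketch in Python. The MTBF figures and drive counts are the ones quoted above; note that by the RF formula, 1.5 million hours spread across 10 drives works out to 150,000 hours. The RF itself is just this article’s quick-and-dirty yardstick, not an industry-standard reliability metric:

```python
def reliability_factor(mtbf_hours: float, num_drives: int) -> float:
    """Quick-and-dirty Reliability Factor: the published per-drive MTBF
    divided by the number of drives needed for the target usable capacity."""
    return mtbf_hours / num_drives

# Case 1: 2 x 3TB Constellation.2 (7200 RPM), MTBF 1.2 million hours
print(reliability_factor(1_200_000, 2))   # 600000.0 hours

# Case 2: 10 x 600GB Cheetah (10,000 RPM), MTBF 1.5 million hours
print(reliability_factor(1_500_000, 10))  # 150000.0 hours
```

The drive with the lower per-drive MTBF wins once the drive count needed to reach 3TB usable is taken into account.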

Note that I did not bring SAS versus SATA technology into the mix because they don’t matter here. SAS and SATA are merely data channels that drive data in and out of the spinning HDDs. So, folks, don’t be fooled into thinking that a SAS drive is inherently more reliable than a SATA drive. Sometimes they are just the same old spinning HDDs. In fact, the aforementioned Seagate Constellation.2 HDD (3TB, 7200 RPM) comes in both SAS and SATA interfaces.

Of course, this is just one factor in the whole Reliability universe. Other factors such as RAID level, checksums, CRC, and single versus dual controllers also determine the reliability of the entire storage array.

In conclusion, we all know that MTBF alone does not determine the reliability of the solution the prospect is about to purchase. But it is one way you can help the finance people get an idea of reliability.

Gartner figures about the storage market – Half year report

After the IDC report a couple of weeks back, Gartner released their Worldwide External Controller-Based (ECB) Disk Storage Market report last week. The Gartner report mirrors the IDC report, which confirms the situation in the storage market, and it’s good news!

Asia Pacific and Latin America are 2 regions experiencing tremendous growth, at 27.9% and 22.4% respectively. This means that the demand for storage networking and data management professionals is greater than ever. I have always maintained that it is important for professionals like us to enhance our technical and technology know-how to ride on the storage growth momentum.

So from the report, there are no surprises. Below is a table that summarizes the Gartner report.


As you can see, HP lost market share, together with Dell, Fujitsu and Oracle. Oracle is focusing its energies on its Exadata platform (and it’s all about driving more database license sales), and hence their 7000-series is suffering. Fujitsu, despite its partnership with NetApp and EMC, and its own Eternus storage, lost ground as well.

Dell seems to be losing ground too, but that could be the after-effects of divorcing EMC after picking up Compellent early this year. Dell should be able to bounce back, as there are reports stating that Compellent is picking up a good pace for Dell. One of the reports is here.

The biggest loser of the last quarter is HP. Even though it is only a 0.3% market share drop, things do not seem so rosy, as I have been observing their integration of 3PAR since the purchase late last year. No doubt they are firing on all cylinders, but 3PAR does not seem to be helping HP gain market share (yet). The mid-tier has to be addressed as well, and having the old-timer EVA at the helm is beginning to show split ends. Good for the hairdresser; not good for HP. IBRIX and LeftHand complete most of HP’s storage line-up.

HDS is gaining ground as their storage story begins to gel quite well. Coupled with some great moves consolidating their services business and their Deal Operations Center (DOC) in Kuala Lumpur, they have simplified doing business with them. Every company has its challenges, but I am beginning to see quite a bit of traction from HDS in the local business scene.

IBM also increased market share, with a 0.2% jump. Rather tepid overall, but I was informed by an IBMer that their DS8000s and XIVs are doing great in the South East Asia region. Kudos, but again IBM still has to transform its mid-tier DS4000/5000 business, whose storage back-end IBM OEMs from NetApp Engenio.

EMC and NetApp are the 2 juggernauts. EMC has been king of the hill for many quarters, and I have always been surprised at how nimble EMC is, despite being an 800-pound gorilla. NetApp has proven its critics wrong. For many quarters it has been taking market share, and that is reflected in the Gartner Half Year Report below:


There you have it, folks: the Gartner WW ECB Disk Storage Report. Again, I just want to mention that this is a wonderful opportunity for those of us doing storage and data management solutions. The demand is there for experienced and skilled professionals, but we have to be good, really good, to compete with the rest.

NFS deserves more credit from guys doing virtualization

I was at the RedHat Forum last week when I chanced upon a conversation between an attendee and one of the ECS engineers. The conversation went like this:

Attendee: Is the RHEV running on SAN or NAS?

ECS Engineer: Oh, for this demo it is running NFS, but in production you should run iSCSI or Fibre Channel. NFS is for labs only, not good for production.

Attendee: I see … (and he went off)

I was standing next to them munching my mini-pizza and in my mind, “Oh, come on, NFS is better than that!”

NAS has always played the smaller brother to SAN, but usually for the wrong reasons. Perhaps it is the perception that NAS is low-end and not good enough for high-end production systems. This is very wrong, because NAS has been growing at a faster rate than Fibre Channel, while Fibre Channel growth has been tapering off and is possibly on the wane. And I have always said that NAS is a better-suited protocol when it comes to unstructured data and files, because NAS is the new storage networking currency of Internet storage and the Cloud (this could change very soon with REST-based protocols, but that’s another story). Where else can you find a protocol where sharing is key? iSCSI, even though it has been growing at a faster pace in production storage, cannot be shared easily because it is block-based.

Now back to NFS. NFS version 3 has been around for more than 15 years and has taken its share of bad raps. I agree that this protocol still dominates the landscape of most NFS installations. But NFS version 4 is changing all that, taking on the better parts of the CIFS protocol, notably the equivalent of opportunistic locking, or oplocks. In addition, it has greatly enhanced security, incorporating Kerberos-based authentication. As for performance, NFS v4 added COMPOUND operations for aggregating multiple operations into a single request.
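To illustrate why COMPOUND helps, here is a toy round-trip model in Python. This is my own simplification, not actual NFS code, and the operation count and 0.5 ms RTT are made-up example values: opening and reading a file involves a sequence of operations such as LOOKUP, OPEN and READ, and an NFSv3-style client pays a network round trip for each, while an NFSv4 client can bundle the sequence into one COMPOUND request.

```python
def total_rpc_latency_ms(num_ops: int, rtt_ms: float, compound: bool) -> float:
    """Network round-trip cost of a sequence of NFS operations.
    Without COMPOUND, each operation pays a full round trip;
    with COMPOUND, the whole sequence travels in a single request."""
    round_trips = 1 if compound else num_ops
    return round_trips * rtt_ms

# e.g. LOOKUP + OPEN + READ over a network with a 0.5 ms round-trip time
print(total_rpc_latency_ms(3, 0.5, compound=False))  # 1.5
print(total_rpc_latency_ms(3, 0.5, compound=True))   # 0.5
```

The server still processes every operation; the saving is purely in wire round trips, which is why COMPOUND matters most on higher-latency links.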

Today, most virtualization solutions from VMware and RedHat work with NFS natively. Note that the Windows CIFS protocol is not supported, only NFS.

This blog entry is not stating that NFS is better than iSCSI or FC, but giving NFS credit where credit is due. NFS is not inferior to these block-based protocols. In fact, there are situations where NFS is better, for instance expanding an NFS-based datastore on the fly in a VMware implementation. I will use several performance-related examples, since performance is often used as a yardstick when these protocols are compared.

In an experiment conducted by VMware based on version 4.0, with all things being equal, below is a series of graphs that compares these 3 protocols (NFS, iSCSI and FC). Note the comparison between NFS and iSCSI rather than FC, because NFS and iSCSI run on Gigabit Ethernet, whereas FC is on a different networking platform (hey, if you’ve got the money, go ahead and buy FC!).

Based on one virtual machine (VM), the read throughput statistics (higher is better) are:


The red circle shows that NFS is up there with iSCSI in terms of read throughput from 4K blocks to 512K blocks. As for write throughput for 1 VM, the graph is shown below:

Even though NFS suffers in write throughput at the smaller block sizes below 16KB, NFS write throughput overtakes iSCSI in the 16K to 32K range and is equal in the 64K, 128K and 512K block tests.

The 2 graphs above are of a single VM. But in most real production environments, a single ESX host will run multiple VMs, and here is the throughput graph for multiple VMs.

Again, you can see that in a multiple-VM environment, NFS and iSCSI are equal in throughput, dispelling the notion that NFS is not as good in performance as iSCSI.

Oh, you might say that these are just VMs without any OSes or applications running in them. Next, I want to share with you another performance test conducted by VMware for a Microsoft Exchange environment.

The next statistics are produced by an Exchange Load Generator (popularly known as LoadGen) simulating the load of 16,000 Exchange users running in 8 VMs. With all things being equal again, you will be surprised when you see these graphs.

The graph above shows the average send-mail latency of the 3 protocols (lower is better). On average, NFS has lower latency than iSCSI, better than what most people might think. Another graph shows the 95th percentile of send-mail latency below:


Again, you can see that NFS’s latency is lower than iSCSI’s. Interesting, isn’t it?

What about IOPS then? In another test with an 8-hour DoubleHeavy LoadGen simulation, the IOPS graphs for all 3 protocols are shown below:

In the graph above (higher is better), NFS performed reasonably well compared to the other 2 block-based protocols, even outperforming iSCSI in this 8-hour load test. Surprising, huh?

As I have shown, NFS is not inferior to block-based protocols such as iSCSI. In fact, VMware in version 4.1 has improved all 3 storage protocols significantly, as mentioned in the VMware paper. The following are quoted in the paper for NFS and iSCSI.

  1. Using storage microbenchmarks, we observe that vSphere 4.1 NFS shows improvements in the range of 12–40% for Reads, and improvements in the range of 32–124% for Writes, over 10GbE.
  2. Using storage microbenchmarks, we observe that vSphere 4.1 Software iSCSI shows improvements in the range of 6–23% for Reads, and improvements in the range of 8–19% for Writes, over 10GbE.

The performance improvement for NFS is significant when the network infrastructure is 10GbE. The jump is between 32 and 124%! That’s a whopping figure compared to iSCSI, which ranged from 8 to 19%. Since both protocols were neck-and-neck in version 4.0, NFS seems to be taking a bigger lead in version 4.1. With the release of VMware version 5.0 a few weeks ago, we shall know the performance of both NFS and iSCSI soon.

To be fair, NFS does take a higher CPU hit compared to iSCSI, as the graph below shows:

Also note that the load tests are based on NFS version 3. If version 4 were used, I am sure the performance statistics above would reach a whole new plateau.

Therefore, NFS isn’t inferior at all to iSCSI, even in a 10GbE environment. We just have to know the facts instead of brushing off NFS.

HDS acquires BlueArc … no surprise

After my early morning exercise routine, I sat down with my laptop hoping to start a new blog entry when a certain HDS news caught my eye. Here’s one of them.

It is of no surprise to me, because all along HDS hardly had a competitive, high-end NAS of their own. Their first Linux-based NAS sucked, and HNAS wasn’t really successful either. But their 5-year OEM deal with BlueArc gave HDS a strong option in the NAS space.

As usual, HDS is as cautious as ever. While the 800-pound gorilla EMC has been on a shopping spree for the past 3-4 years, and NetApp has acquired a few companies along the way (note Engenio, Bycast, Akorri, Onaro), the only notable acquisition made by HDS was Archivas (news here). That was waaaaaay back in 2007. However, what prompted the HDS reaction was a surprise to me. According to Network Computing, it was IBM who wanted to acquire BlueArc, triggering HDS to exercise its right of first refusal and fork out the dough for BlueArc.

Why does IBM want to acquire BlueArc? IBM is sliding and lacks storage array technology of its own. Only XIV and Storwize are worth mentioning, because their DS-series and N-series belong to NetApp. Their SONAS is pretty much a patchwork of IBM GPFS servers. In fact, from the same Network Computing article, IBM has terminated their DataDirect Networks storage back-end and just initiated sourcing of the storage back-end from NetApp. That is good money for NetApp, but bad for IBM. Their story doesn’t gel anymore and their platform portfolio staggers as we speak.

This will definitely prompt IBM’s competitors to sharpen their knives. HP is renewing its artillery with 3PAR, LeftHand and IBRIX, while Dell is coming out with guns blazing with Compellent, EqualLogic, a bit of Exanet and pretty soon Ocarina Networks (a primary storage deduplication technology). Though Dell lost market share in the last IDC figures, most likely because of lost EMC sales, they seem to be looking good with Compellent and EqualLogic. HP is still renewing, and perhaps when they are done ditching their PC business, they will have more focus on the enterprise. Meanwhile, HDS has been winning market share in the last IDC quarter and doing well with their own VSP and AMS series.

HP and Dell have reloaded, and EMC and NetApp are charging into the market as storage juggernauts. IBM cannot afford to sit quietly. How long is IBM prepared to do that as the world passes them by?

As for HDS, they are piecing their story together: AMS at the low and mid-end, VSP at the mid to high end, and BlueArc fitting into the NAS and scale-out NAS space. Yup, they are getting there.

We do not hear much about BlueArc from HDS Malaysia, but be prepared to know more about them soon. Wonder how HDS would rename BlueArc? H-BLU? H-ARC?

Funny Microsoft Cloud video – has Microsoft seen a mirror lately?

I am no virtualization expert, but any IT guy would be able to tell you how far ahead VMware is in the virtualization space. (note that I am not talking about the cloud space)

Virtualization is the cornerstone of Cloud Computing, and everyone is claiming they are Cloud-this or Cloud-that. So is Microsoft, but when it comes to the virtualization game, Microsoft Hyper-V has much catching up to do, if it ever catches up with VMware.

As they have done in the past to other technologies (think Netscape), they cast an industry-wide suffocation strategy that snuffed the lights out of their competitors. But things have changed, and new proponents such as Open Source, Mobile Computing and Cloud Computing are not going to be victims of Microsoft’s asphyxiation strategy (come on, Ballmer, try something new). Hence, when I found this video …

I found it really funny. It was as if Microsoft was poking fun at itself without knowing it.

In the words of Mahatma Gandhi,

“First they ignore you,

then they laugh at you,

then they fight you,

then you win”

This is what Microsoft does best … throwing dirt at the competitor. Hmmm….

Got invited to HP Malaysia’s workshop … he he!

No, HP probably didn’t read my blogs and this isn’t a knee-jerk reaction from HP about things I have been writing. OK, I didn’t write about HP because I don’t know much about them. But this came as a coincidence as well as an apt title (my bad for the shameless plug for this entry’s title).

In my previous blog entry, I wrote about HP’s future based on the latest IDC Q2 market share figures. I was not too enthusiastic about HP’s storage line-up. Today, my old friend Mr. CC Chung, HP’s Country Manager for Storage, had tea with me at Bangsar Shopping Center. We were there to discuss HP’s engagement with SNIA when the topic of HP’s storage came up (obviously). Chung said I lack an understanding of HP storage solutions, which, I admit, is very true. And so my friend kindly invited me to a series of HP Storage Solutions workshops, which I accepted with glee and gratitude. Thank you very much, my friend.

Here’s a screenshot of their upcoming workshops:


I am seriously looking forward to the workshops and learning about the vibes of HP Storage Solutions. Too bad there aren’t workshops for HP 3PAR and HP X9000 IBRIX, but I am sure this will be the start of my new friendship with HP.

Incidentally, as I was waiting for Chung, I was reading the HWM Magazine August 2011 issue, and lo and behold, Chung was in the news announcing the HP X9000 IBRIX and X5000 G2 Network Storage System. I couldn’t find the HWM article online, but I found the next best thing: a similar article (online, of course) at CIO Asia. And with a nice picture of Mr. CC Chung too!

EMC and NetApp gaining market share with the latest IDC figures

The IDC 2Q11 global disk storage systems report is out. The good news is that data is still growing, and at a tremendous pace as well. Both revenue and capacity have raced ahead with double-digit growth, with capacity growth reaching almost 50%.

And not surprisingly to me, EMC and NetApp have gained market share at the expense of HP, IBM and Dell. Here are a couple of statistics tables:

Both EMC and NetApp have recorded more than 25% revenue growth, taking 1st and joint-2nd place respectively. I have always been impressed by both companies.

For EMC, the 800-pound gorilla of the storage market, to be able to get 26% revenue growth is a massive, massive endorsement of how well EMC executes. They are like a big oil tanker in rough seas with the ability to do a 90-degree turn in the blink of an eye. Kudos to Joe Tucci and Pat Gelsinger.

NetApp has always been my “little engine that could”. Their ability to take market share quarter-on-quarter, year-on-year is second to none, and once again they did not disappoint. Even the change of the big man from Dan Warmenhoven to Tom Georgens did not put a smudge in their armour. And with the purchase of LSI’s Engenio division this year, NetApp will go from strength to strength, gaining market share at others’ expense. I believe NetApp’s culture plays a big role in their ability and their success. The management has always been honest and frank, and there’s a lot of respect for an individual’s ability to contribute. No wonder they are the #5 best company to work for in the US.

The big surprise for me here is Hitachi Data Systems, posting 23.3% growth. That’s tremendous, because HDS has never been known to hit such high growth. Perhaps they have finally got the formula right. Their VSP and AMS ranges must be selling well, but again, for HDS it is a challenge running 2 different cultural systems within the company. The Japanese team and the US team must be hitting synchronicity at last.

Dell, despite firing on all cylinders with EqualLogic and Compellent, actually lost market share. Their partnership with EMC has come to an end and they have not yet converted their customers to the EqualLogic and Compellent boxes. The Compellent purchase is fairly new (Q1 of 2011) and this will take some time to sink in with their customers. Let’s see how they fare in the next IDC report.

In the table above, HP has always been king of the hill. Bundling their direct-attached or internal storage with their servers, just like IBM, has given them an unfair advantage. But for the first time, EMC has outshipped HP, without the presence of DAS and internal storage (which EMC does not sell). Even with the purchase of 3PAR late last year, HP was not able to milk the best of what 3PAR can offer. And not to mention that HP also has LeftHand Networks, now renumbered as the P4000. On the other hand, this is a fantastic result for EMC.

Where’s IBM in all this? Rather anemic, sad to say, compared to EMC and NetApp. IBM’s figures were half of what EMC and NetApp posted, and this is not good. They don’t have the right weapons to compete. XIV is slowly taking over the mantle of DS8000 as their flagship storage, and their DS series is putting up its usual numbers. But that’s not good enough, because if you look at the IBM line-up, their Shark is pretty much gone. XIV and Storwize are the only 2 storage platforms that IBM owns. Mind you, Storwize is not really a primary storage solution; it’s a compression engine. The DS-series and N-series actually belong to LSI (whose Engenio division NetApp owns) and NetApp respectively. So IBM lacks the IP for storage, and in the long run IBM must do something about it. They must either buy or innovate. They should have bought NetApp when they had the chance in 2002, but today NetApp is becoming an impossible meal to swallow.

We shall see how IBM turns out but if they continue to suffer from anemia, there’s going to be trouble down the road.

As for HP, what can I say? Their XP range is from HDS, but with 3PAR in the picture, it looks like that marriage could be ending soon. EVA is an aging platform and they have got to refresh it with stronger middle-tier platforms. At the low end of the range, the MSA is also unexciting, and I secretly believe that LeftHand should have stepped up. But unfortunately, the HP sales force has to be careful not to push MSA and LeftHand side by side and let them cannibalize each other. HP definitely has a challenge on its hands, and both 3PAR and LeftHand have been with them for more than 2 quarters. It’s time to execute, because the IDC figures have already proved that they are slipping.

What next HP?


All-SSD storage arrays? There’s more than meets the eye at Pure Storage

Wow, after an entire week off with the holidays, I am back and excited about the many happenings in the storage world.

One of the more prominent pieces of news was Pure Storage launching its enterprise storage array built entirely with flash-based solid state drives. In addition, other start-ups are also offering SSD storage arrays. The likes of Nimbus Data, Avere and Violin Memory Systems all made the news, as well as the granddaddy of solid state storage arrays, Texas Memory Systems.

The first thing that came to my mind was, “Wow, this is great because this will push down the $/GB of SSDs closer to the range of $/GB for spinning disks”. But then skepticism crept in and I thought, “Do we really need an entire enterprise storage array of SSDs? That’s going to cost the world”.

At the same time, we in the storage industry know that no two pieces of data are alike. They can be large, small, random, sequential, accessed frequently or infrequently, and so on. It is obviously better to tier the storage, using SSDs for Tier 0, 10K/15K RPM spinning HDDs for Tier 1, SATA for Tier 2 and perhaps tape for the archive tier. I was already tempted to write about my pessimism on Pure Storage when something interesting caught my attention.

Besides the usual marketing jive of sub-millisecond, predictable latency, green messaging, global inline deduplication, compression and built-in data integrity in its Purity Operating Environment (POE), I was very surprised by the team behind Pure Storage. Here’s their line-up:

  • Scott Dietzen, CEO – starting as principal technologist of Transarc (sold to IBM), principal architect of WebLogic (sold to BEA Systems), CTO of BEA (sold to Oracle) and CTO of Zimbra (sold to Yahoo! and then to VMware)
  • John “Coz” Colgrove, Founder & CTO – Veritas Fellow, CTO of Symantec Data Management group, principal architect of Veritas Volume Manager (VxVM) and Veritas File System (VxFS) and holder of 70 patents
  • John Hayes, Founder & Chief Architect – formerly of Yahoo!’s office of the Chief Technologist
  • Bob Wood, VP of Engineering – formerly NetApp’s VP of File System Engineering
  • Michael Cornwell, Director of Technology & Strategy – formerly the lead technologist of Sun Microsystems’ Sun Storage F5100 Flash Array and also Quantum’s storage architect for their storage telemetry, VTL and DXi solutions
  • Ko Yamamoto, VP of System Engineering – previously NetApp’s director of platform engineering, Quantum DXi director of hardware engineering, and also key contributor to 4-generations of Tandem NonStop technology

In addition, there are 3 key individual investors worth mentioning:

  • Diane Greene – co-founder and former CEO of VMware
  • Dr. Mendel Rosenblum – co-founder, former Chief Scientist and co-creator of VMware
  • Frank Slootman – formerly CEO of Data Domain (acquired by EMC)

All these industry big guns are flocking to Pure Storage for a reason, and it looks to me like Pure Storage ain’t your ordinary, run-of-the-mill enterprise storage company. There’s definitely more than meets the eye.

On top of the enterprise storage array platform is Pure Storage’s Purity Operating Environment (POE). POE focuses on 3 key storage services:

  • High Performance Data Reduction
  • Mission Critical Reliability
  • Predictable Sub-millisecond Performance

After going through the deep-dive videos by Pure Storage’s CTO, John Colgrove, it is clear they are very much banking the success of their solution on SSDs. Everything they have done is based on SSDs. For example, in order to achieve a larger usable capacity as well as a much cheaper $/GB, they use data reduction techniques: global deduplication, high compression and fine-grained thin provisioning at 512-byte granularity. By trading off IOPS (of which SSDs have plenty, since they are several times faster than conventional spinning disks), a larger usable capacity is achieved.
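The capacity economics behind that trade-off can be sketched as follows. This is a hedged illustration with hypothetical numbers: Pure Storage does not publish pricing in these terms, and the $10/GB raw cost and 5:1 reduction ratio are only example values.

```python
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Effective $/GB after data reduction: with global deduplication and
    compression, every raw GB of flash holds reduction_ratio GB of logical data."""
    return raw_cost_per_gb / reduction_ratio

# Hypothetical: $10/GB raw SSD with a 5:1 combined dedupe + compression ratio
print(effective_cost_per_gb(10.0, 5.0))  # 2.0 -- approaching spinning-disk territory
```

The deduplication and compression work costs IOPS and CPU, which is exactly the trade-off described above: spend some of the SSDs’ abundant IOPS to buy back usable capacity.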

In their RAID 3D, they have also incorporated several high-reliability techniques and data integrity algorithms that are specific to SSDs. One point mentioned was that traditional RAID, especially the parity-based RAID levels, was originally designed to protect against an entire device failure. However, in SSDs the failure does not necessarily occur across the entire device. Because of the way SSDs are built, the failure hotspots tend to happen at the much more granular bit level. The erase-then-write cycles inherent in NAND flash SSDs cause the bit error rate (BER) of the device to go up as it ages. Therefore, it is more likely to get a read/write error from within the SSD’s memory itself than to have the entire SSD device fail. Pure Storage’s RAID 3D is meant to address such occurrences of bit errors.

I spoke a bit about storage tiering earlier in this article, because every corporation employs storage tiering to be financially responsible. However, John Colgrove’s argument was: why tier the storage when there are plenty of IOPS and the $/GB is comparable to spinning disks? That holds when the $/GB of SSDs can match the $/GB of spinning disks. Factors we must also take into account are the rack-space savings from the smaller profile of SSDs and the power savings of SSDs versus conventional HDD-based enterprise storage arrays. Taken in their entirety, there are strong indications that the $/GB of SSD-based systems will match, or perhaps go lower than, the $/GB of HDD-based systems. And since the IOPS requirements of present-day applications have not demanded super-high IOPS and multi-core processing is cheap, there’s plenty of head-room for Pure Storage and other similar enterprise storage array companies to grow.

The tides are changing for the storage industry, and it is good to see a start-up like Pure Storage boldly coming forth to announce their backing for SSDs. It’s good for the consumer and good for the industry. But more importantly, they are driving innovations that rethink how we build storage arrays. I am looking forward to more things to come.